Oct 08 09:00:43 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 08 09:00:43 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 08 09:00:43 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 08 09:00:43 localhost kernel: BIOS-provided physical RAM map:
Oct 08 09:00:43 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 08 09:00:43 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 08 09:00:43 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 08 09:00:43 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 08 09:00:43 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 08 09:00:43 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 08 09:00:43 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 08 09:00:43 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 08 09:00:43 localhost kernel: NX (Execute Disable) protection: active
Oct 08 09:00:43 localhost kernel: APIC: Static calls initialized
Oct 08 09:00:43 localhost kernel: SMBIOS 2.8 present.
Oct 08 09:00:43 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 08 09:00:43 localhost kernel: Hypervisor detected: KVM
Oct 08 09:00:43 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 08 09:00:43 localhost kernel: kvm-clock: using sched offset of 4135282089 cycles
Oct 08 09:00:43 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 08 09:00:43 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 08 09:00:43 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 08 09:00:43 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 08 09:00:43 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 08 09:00:43 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 08 09:00:43 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 08 09:00:43 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 08 09:00:43 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 08 09:00:43 localhost kernel: Using GB pages for direct mapping
Oct 08 09:00:43 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 08 09:00:43 localhost kernel: ACPI: Early table checksum verification disabled
Oct 08 09:00:43 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 08 09:00:43 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 09:00:43 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 09:00:43 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 09:00:43 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 08 09:00:43 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 09:00:43 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 09:00:43 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 08 09:00:43 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 08 09:00:43 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 08 09:00:43 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 08 09:00:43 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 08 09:00:43 localhost kernel: No NUMA configuration found
Oct 08 09:00:43 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 08 09:00:43 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 08 09:00:43 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 08 09:00:43 localhost kernel: Zone ranges:
Oct 08 09:00:43 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 08 09:00:43 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 08 09:00:43 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 08 09:00:43 localhost kernel:   Device   empty
Oct 08 09:00:43 localhost kernel: Movable zone start for each node
Oct 08 09:00:43 localhost kernel: Early memory node ranges
Oct 08 09:00:43 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 08 09:00:43 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 08 09:00:43 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 08 09:00:43 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 08 09:00:43 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 08 09:00:43 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 08 09:00:43 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 08 09:00:43 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 08 09:00:43 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 08 09:00:43 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 08 09:00:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 08 09:00:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 08 09:00:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 08 09:00:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 08 09:00:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 08 09:00:43 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 08 09:00:43 localhost kernel: TSC deadline timer available
Oct 08 09:00:43 localhost kernel: CPU topo: Max. logical packages:   8
Oct 08 09:00:43 localhost kernel: CPU topo: Max. logical dies:       8
Oct 08 09:00:43 localhost kernel: CPU topo: Max. dies per package:   1
Oct 08 09:00:43 localhost kernel: CPU topo: Max. threads per core:   1
Oct 08 09:00:43 localhost kernel: CPU topo: Num. cores per package:     1
Oct 08 09:00:43 localhost kernel: CPU topo: Num. threads per package:   1
Oct 08 09:00:43 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 08 09:00:43 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 08 09:00:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 08 09:00:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 08 09:00:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 08 09:00:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 08 09:00:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 08 09:00:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 08 09:00:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 08 09:00:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 08 09:00:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 08 09:00:43 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 08 09:00:43 localhost kernel: Booting paravirtualized kernel on KVM
Oct 08 09:00:43 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 08 09:00:43 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 08 09:00:43 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 08 09:00:43 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 08 09:00:43 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 08 09:00:43 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 08 09:00:43 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 08 09:00:43 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 08 09:00:43 localhost kernel: random: crng init done
Oct 08 09:00:43 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 08 09:00:43 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 08 09:00:43 localhost kernel: Fallback order for Node 0: 0 
Oct 08 09:00:43 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 08 09:00:43 localhost kernel: Policy zone: Normal
Oct 08 09:00:43 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 08 09:00:43 localhost kernel: software IO TLB: area num 8.
Oct 08 09:00:43 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 08 09:00:43 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 08 09:00:43 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 08 09:00:43 localhost kernel: Dynamic Preempt: voluntary
Oct 08 09:00:43 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 08 09:00:43 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 08 09:00:43 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 08 09:00:43 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 08 09:00:43 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 08 09:00:43 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 08 09:00:43 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 08 09:00:43 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 08 09:00:43 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 08 09:00:43 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 08 09:00:43 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 08 09:00:43 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 08 09:00:43 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 08 09:00:43 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 08 09:00:43 localhost kernel: Console: colour VGA+ 80x25
Oct 08 09:00:43 localhost kernel: printk: console [ttyS0] enabled
Oct 08 09:00:43 localhost kernel: ACPI: Core revision 20230331
Oct 08 09:00:43 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 08 09:00:43 localhost kernel: x2apic enabled
Oct 08 09:00:43 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 08 09:00:43 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 08 09:00:43 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 08 09:00:43 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 08 09:00:43 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 08 09:00:43 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 08 09:00:43 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 08 09:00:43 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 08 09:00:43 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 08 09:00:43 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 08 09:00:43 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 08 09:00:43 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 08 09:00:43 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 08 09:00:43 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 08 09:00:43 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 08 09:00:43 localhost kernel: x86/bugs: return thunk changed
Oct 08 09:00:43 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 08 09:00:43 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 08 09:00:43 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 08 09:00:43 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 08 09:00:43 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 08 09:00:43 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 08 09:00:43 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 08 09:00:43 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 08 09:00:43 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 08 09:00:43 localhost kernel: landlock: Up and running.
Oct 08 09:00:43 localhost kernel: Yama: becoming mindful.
Oct 08 09:00:43 localhost kernel: SELinux:  Initializing.
Oct 08 09:00:43 localhost kernel: LSM support for eBPF active
Oct 08 09:00:43 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 08 09:00:43 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 08 09:00:43 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 08 09:00:43 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 08 09:00:43 localhost kernel: ... version:                0
Oct 08 09:00:43 localhost kernel: ... bit width:              48
Oct 08 09:00:43 localhost kernel: ... generic registers:      6
Oct 08 09:00:43 localhost kernel: ... value mask:             0000ffffffffffff
Oct 08 09:00:43 localhost kernel: ... max period:             00007fffffffffff
Oct 08 09:00:43 localhost kernel: ... fixed-purpose events:   0
Oct 08 09:00:43 localhost kernel: ... event mask:             000000000000003f
Oct 08 09:00:43 localhost kernel: signal: max sigframe size: 1776
Oct 08 09:00:43 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 08 09:00:43 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 08 09:00:43 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 08 09:00:43 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 08 09:00:43 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 08 09:00:43 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 08 09:00:43 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 08 09:00:43 localhost kernel: node 0 deferred pages initialised in 25ms
Oct 08 09:00:43 localhost kernel: Memory: 7765352K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616504K reserved, 0K cma-reserved)
Oct 08 09:00:43 localhost kernel: devtmpfs: initialized
Oct 08 09:00:43 localhost kernel: x86/mm: Memory block size: 128MB
Oct 08 09:00:43 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 08 09:00:43 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 08 09:00:43 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 08 09:00:43 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 08 09:00:43 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 08 09:00:43 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 08 09:00:43 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 08 09:00:43 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 08 09:00:43 localhost kernel: audit: type=2000 audit(1759914042.174:1): state=initialized audit_enabled=0 res=1
Oct 08 09:00:43 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 08 09:00:43 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 08 09:00:43 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 08 09:00:43 localhost kernel: cpuidle: using governor menu
Oct 08 09:00:43 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 08 09:00:43 localhost kernel: PCI: Using configuration type 1 for base access
Oct 08 09:00:43 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 08 09:00:43 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 08 09:00:43 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 08 09:00:43 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 08 09:00:43 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 08 09:00:43 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 08 09:00:43 localhost kernel: Demotion targets for Node 0: null
Oct 08 09:00:43 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 08 09:00:43 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 08 09:00:43 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 08 09:00:43 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 08 09:00:43 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 08 09:00:43 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 08 09:00:43 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 08 09:00:43 localhost kernel: ACPI: Interpreter enabled
Oct 08 09:00:43 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 08 09:00:43 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 08 09:00:43 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 08 09:00:43 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 08 09:00:43 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 08 09:00:43 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 08 09:00:43 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [3] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [4] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [5] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [6] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [7] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [8] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [9] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [10] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [11] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [12] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [13] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [14] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [15] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [16] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [17] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [18] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [19] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [20] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [21] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [22] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [23] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [24] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [25] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [26] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [27] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [28] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [29] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [30] registered
Oct 08 09:00:43 localhost kernel: acpiphp: Slot [31] registered
Oct 08 09:00:43 localhost kernel: PCI host bridge to bus 0000:00
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 08 09:00:43 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 08 09:00:43 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 08 09:00:43 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 08 09:00:43 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 08 09:00:43 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 08 09:00:43 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 08 09:00:43 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 08 09:00:43 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 08 09:00:43 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 08 09:00:43 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 08 09:00:43 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 08 09:00:43 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 08 09:00:43 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 08 09:00:43 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 08 09:00:43 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 08 09:00:43 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 08 09:00:43 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 08 09:00:43 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 08 09:00:43 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 08 09:00:43 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 08 09:00:43 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 08 09:00:43 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 08 09:00:43 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 08 09:00:43 localhost kernel: iommu: Default domain type: Translated
Oct 08 09:00:43 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 08 09:00:43 localhost kernel: SCSI subsystem initialized
Oct 08 09:00:43 localhost kernel: ACPI: bus type USB registered
Oct 08 09:00:43 localhost kernel: usbcore: registered new interface driver usbfs
Oct 08 09:00:43 localhost kernel: usbcore: registered new interface driver hub
Oct 08 09:00:43 localhost kernel: usbcore: registered new device driver usb
Oct 08 09:00:43 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 08 09:00:43 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 08 09:00:43 localhost kernel: PTP clock support registered
Oct 08 09:00:43 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 08 09:00:43 localhost kernel: NetLabel: Initializing
Oct 08 09:00:43 localhost kernel: NetLabel:  domain hash size = 128
Oct 08 09:00:43 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 08 09:00:43 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 08 09:00:43 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 08 09:00:43 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 08 09:00:43 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 08 09:00:43 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 08 09:00:43 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 08 09:00:43 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 08 09:00:43 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 08 09:00:43 localhost kernel: vgaarb: loaded
Oct 08 09:00:43 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 08 09:00:43 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 08 09:00:43 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 08 09:00:43 localhost kernel: pnp: PnP ACPI init
Oct 08 09:00:43 localhost kernel: pnp 00:03: [dma 2]
Oct 08 09:00:43 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 08 09:00:43 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 08 09:00:43 localhost kernel: NET: Registered PF_INET protocol family
Oct 08 09:00:43 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 08 09:00:43 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 08 09:00:43 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 08 09:00:43 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 08 09:00:43 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 08 09:00:43 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 08 09:00:43 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 08 09:00:43 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 08 09:00:43 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 08 09:00:43 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 08 09:00:43 localhost kernel: NET: Registered PF_XDP protocol family
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 08 09:00:43 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 08 09:00:43 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 08 09:00:43 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 08 09:00:43 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 102619 usecs
Oct 08 09:00:43 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 08 09:00:43 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 08 09:00:43 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 08 09:00:43 localhost kernel: ACPI: bus type thunderbolt registered
Oct 08 09:00:43 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 08 09:00:43 localhost kernel: Initialise system trusted keyrings
Oct 08 09:00:43 localhost kernel: Key type blacklist registered
Oct 08 09:00:43 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 08 09:00:43 localhost kernel: zbud: loaded
Oct 08 09:00:43 localhost kernel: integrity: Platform Keyring initialized
Oct 08 09:00:43 localhost kernel: integrity: Machine keyring initialized
Oct 08 09:00:43 localhost kernel: Freeing initrd memory: 86104K
Oct 08 09:00:43 localhost kernel: NET: Registered PF_ALG protocol family
Oct 08 09:00:43 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 08 09:00:43 localhost kernel: Key type asymmetric registered
Oct 08 09:00:43 localhost kernel: Asymmetric key parser 'x509' registered
Oct 08 09:00:43 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 08 09:00:43 localhost kernel: io scheduler mq-deadline registered
Oct 08 09:00:43 localhost kernel: io scheduler kyber registered
Oct 08 09:00:43 localhost kernel: io scheduler bfq registered
Oct 08 09:00:43 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 08 09:00:43 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 08 09:00:43 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 08 09:00:43 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 08 09:00:43 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 08 09:00:43 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 08 09:00:43 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 08 09:00:43 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 08 09:00:43 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 08 09:00:43 localhost kernel: Non-volatile memory driver v1.3
Oct 08 09:00:43 localhost kernel: rdac: device handler registered
Oct 08 09:00:43 localhost kernel: hp_sw: device handler registered
Oct 08 09:00:43 localhost kernel: emc: device handler registered
Oct 08 09:00:43 localhost kernel: alua: device handler registered
Oct 08 09:00:43 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 08 09:00:43 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 08 09:00:43 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 08 09:00:43 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 08 09:00:43 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 08 09:00:43 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 08 09:00:43 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 08 09:00:43 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 08 09:00:43 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 08 09:00:43 localhost kernel: hub 1-0:1.0: USB hub found
Oct 08 09:00:43 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 08 09:00:43 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 08 09:00:43 localhost kernel: usbserial: USB Serial support registered for generic
Oct 08 09:00:43 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 08 09:00:43 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 08 09:00:43 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 08 09:00:43 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 08 09:00:43 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 08 09:00:43 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 08 09:00:43 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 08 09:00:43 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-08T09:00:42 UTC (1759914042)
Oct 08 09:00:43 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 08 09:00:43 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 08 09:00:43 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 08 09:00:43 localhost kernel: usbcore: registered new interface driver usbhid
Oct 08 09:00:43 localhost kernel: usbhid: USB HID core driver
Oct 08 09:00:43 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 08 09:00:43 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 08 09:00:43 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 08 09:00:43 localhost kernel: Initializing XFRM netlink socket
Oct 08 09:00:43 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 08 09:00:43 localhost kernel: Segment Routing with IPv6
Oct 08 09:00:43 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 08 09:00:43 localhost kernel: mpls_gso: MPLS GSO support
Oct 08 09:00:43 localhost kernel: IPI shorthand broadcast: enabled
Oct 08 09:00:43 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 08 09:00:43 localhost kernel: AES CTR mode by8 optimization enabled
Oct 08 09:00:43 localhost kernel: sched_clock: Marking stable (1241003392, 141840199)->(1497292525, -114448934)
Oct 08 09:00:43 localhost kernel: registered taskstats version 1
Oct 08 09:00:43 localhost kernel: Loading compiled-in X.509 certificates
Oct 08 09:00:43 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 08 09:00:43 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 08 09:00:43 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 08 09:00:43 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 08 09:00:43 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 08 09:00:43 localhost kernel: Demotion targets for Node 0: null
Oct 08 09:00:43 localhost kernel: page_owner is disabled
Oct 08 09:00:43 localhost kernel: Key type .fscrypt registered
Oct 08 09:00:43 localhost kernel: Key type fscrypt-provisioning registered
Oct 08 09:00:43 localhost kernel: Key type big_key registered
Oct 08 09:00:43 localhost kernel: Key type encrypted registered
Oct 08 09:00:43 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 08 09:00:43 localhost kernel: Loading compiled-in module X.509 certificates
Oct 08 09:00:43 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 08 09:00:43 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 08 09:00:43 localhost kernel: ima: No architecture policies found
Oct 08 09:00:43 localhost kernel: evm: Initialising EVM extended attributes:
Oct 08 09:00:43 localhost kernel: evm: security.selinux
Oct 08 09:00:43 localhost kernel: evm: security.SMACK64 (disabled)
Oct 08 09:00:43 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 08 09:00:43 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 08 09:00:43 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 08 09:00:43 localhost kernel: evm: security.apparmor (disabled)
Oct 08 09:00:43 localhost kernel: evm: security.ima
Oct 08 09:00:43 localhost kernel: evm: security.capability
Oct 08 09:00:43 localhost kernel: evm: HMAC attrs: 0x1
Oct 08 09:00:43 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 08 09:00:43 localhost kernel: Running certificate verification RSA selftest
Oct 08 09:00:43 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 08 09:00:43 localhost kernel: Running certificate verification ECDSA selftest
Oct 08 09:00:43 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 08 09:00:43 localhost kernel: clk: Disabling unused clocks
Oct 08 09:00:43 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 08 09:00:43 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 08 09:00:43 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 08 09:00:43 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 08 09:00:43 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 08 09:00:43 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 08 09:00:43 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 08 09:00:43 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 08 09:00:43 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 08 09:00:43 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 08 09:00:43 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 08 09:00:43 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 08 09:00:43 localhost kernel: Run /init as init process
Oct 08 09:00:43 localhost kernel:   with arguments:
Oct 08 09:00:43 localhost kernel:     /init
Oct 08 09:00:43 localhost kernel:   with environment:
Oct 08 09:00:43 localhost kernel:     HOME=/
Oct 08 09:00:43 localhost kernel:     TERM=linux
Oct 08 09:00:43 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 08 09:00:43 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 08 09:00:43 localhost systemd[1]: Detected virtualization kvm.
Oct 08 09:00:43 localhost systemd[1]: Detected architecture x86-64.
Oct 08 09:00:43 localhost systemd[1]: Running in initrd.
Oct 08 09:00:43 localhost systemd[1]: No hostname configured, using default hostname.
Oct 08 09:00:43 localhost systemd[1]: Hostname set to <localhost>.
Oct 08 09:00:43 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 08 09:00:43 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 08 09:00:43 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 08 09:00:43 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 08 09:00:43 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 08 09:00:43 localhost systemd[1]: Reached target Local File Systems.
Oct 08 09:00:43 localhost systemd[1]: Reached target Path Units.
Oct 08 09:00:43 localhost systemd[1]: Reached target Slice Units.
Oct 08 09:00:43 localhost systemd[1]: Reached target Swaps.
Oct 08 09:00:43 localhost systemd[1]: Reached target Timer Units.
Oct 08 09:00:43 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 08 09:00:43 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 08 09:00:43 localhost systemd[1]: Listening on Journal Socket.
Oct 08 09:00:43 localhost systemd[1]: Listening on udev Control Socket.
Oct 08 09:00:43 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 08 09:00:43 localhost systemd[1]: Reached target Socket Units.
Oct 08 09:00:43 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 08 09:00:43 localhost systemd[1]: Starting Journal Service...
Oct 08 09:00:43 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 08 09:00:43 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 08 09:00:43 localhost systemd[1]: Starting Create System Users...
Oct 08 09:00:43 localhost systemd[1]: Starting Setup Virtual Console...
Oct 08 09:00:43 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 08 09:00:43 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 08 09:00:43 localhost systemd[1]: Finished Create System Users.
Oct 08 09:00:43 localhost systemd-journald[308]: Journal started
Oct 08 09:00:43 localhost systemd-journald[308]: Runtime Journal (/run/log/journal/a1287f1c59814c2ea0ce6a9c84016045) is 8.0M, max 153.5M, 145.5M free.
Oct 08 09:00:43 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Oct 08 09:00:43 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Oct 08 09:00:43 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 08 09:00:43 localhost systemd[1]: Started Journal Service.
Oct 08 09:00:43 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 08 09:00:43 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 08 09:00:43 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 08 09:00:43 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 08 09:00:43 localhost systemd[1]: Finished Setup Virtual Console.
Oct 08 09:00:43 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 08 09:00:43 localhost systemd[1]: Starting dracut cmdline hook...
Oct 08 09:00:43 localhost dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Oct 08 09:00:43 localhost dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 08 09:00:43 localhost systemd[1]: Finished dracut cmdline hook.
Oct 08 09:00:43 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 08 09:00:43 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 08 09:00:43 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 08 09:00:43 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 08 09:00:43 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 08 09:00:43 localhost kernel: RPC: Registered udp transport module.
Oct 08 09:00:43 localhost kernel: RPC: Registered tcp transport module.
Oct 08 09:00:43 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 08 09:00:43 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 08 09:00:43 localhost rpc.statd[446]: Version 2.5.4 starting
Oct 08 09:00:43 localhost rpc.statd[446]: Initializing NSM state
Oct 08 09:00:43 localhost rpc.idmapd[451]: Setting log level to 0
Oct 08 09:00:43 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 08 09:00:43 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 08 09:00:43 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Oct 08 09:00:43 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 08 09:00:43 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 08 09:00:43 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 08 09:00:44 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 08 09:00:44 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 08 09:00:44 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 08 09:00:44 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 08 09:00:44 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 08 09:00:44 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 08 09:00:44 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 08 09:00:44 localhost systemd[1]: Reached target Network.
Oct 08 09:00:44 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 08 09:00:44 localhost systemd[1]: Starting dracut initqueue hook...
Oct 08 09:00:44 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 08 09:00:44 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 08 09:00:44 localhost kernel:  vda: vda1
Oct 08 09:00:44 localhost systemd-udevd[486]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 09:00:44 localhost kernel: libata version 3.00 loaded.
Oct 08 09:00:44 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 08 09:00:44 localhost kernel: scsi host0: ata_piix
Oct 08 09:00:44 localhost kernel: scsi host1: ata_piix
Oct 08 09:00:44 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 08 09:00:44 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 08 09:00:44 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 08 09:00:44 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 08 09:00:44 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 08 09:00:44 localhost systemd[1]: Reached target Initrd Root Device.
Oct 08 09:00:44 localhost systemd[1]: Reached target System Initialization.
Oct 08 09:00:44 localhost systemd[1]: Reached target Basic System.
Oct 08 09:00:44 localhost kernel: ata1: found unknown device (class 0)
Oct 08 09:00:44 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 08 09:00:44 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 08 09:00:44 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 08 09:00:44 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 08 09:00:44 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 08 09:00:44 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 08 09:00:44 localhost systemd[1]: Finished dracut initqueue hook.
Oct 08 09:00:44 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 08 09:00:44 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 08 09:00:44 localhost systemd[1]: Reached target Remote File Systems.
Oct 08 09:00:44 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 08 09:00:44 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 08 09:00:44 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 08 09:00:44 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct 08 09:00:44 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 08 09:00:44 localhost systemd[1]: Mounting /sysroot...
Oct 08 09:00:45 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 08 09:00:45 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 08 09:00:45 localhost kernel: XFS (vda1): Ending clean mount
Oct 08 09:00:45 localhost systemd[1]: Mounted /sysroot.
Oct 08 09:00:45 localhost systemd[1]: Reached target Initrd Root File System.
Oct 08 09:00:45 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 08 09:00:45 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 08 09:00:45 localhost systemd[1]: Reached target Initrd File Systems.
Oct 08 09:00:45 localhost systemd[1]: Reached target Initrd Default Target.
Oct 08 09:00:45 localhost systemd[1]: Starting dracut mount hook...
Oct 08 09:00:45 localhost systemd[1]: Finished dracut mount hook.
Oct 08 09:00:45 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 08 09:00:45 localhost rpc.idmapd[451]: exiting on signal 15
Oct 08 09:00:45 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 08 09:00:45 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 08 09:00:45 localhost systemd[1]: Stopped target Network.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Timer Units.
Oct 08 09:00:45 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 08 09:00:45 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Basic System.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Path Units.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Remote File Systems.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Slice Units.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Socket Units.
Oct 08 09:00:45 localhost systemd[1]: Stopped target System Initialization.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Local File Systems.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Swaps.
Oct 08 09:00:45 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped dracut mount hook.
Oct 08 09:00:45 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 08 09:00:45 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 08 09:00:45 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 08 09:00:45 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 08 09:00:45 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 08 09:00:45 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 08 09:00:45 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 08 09:00:45 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 08 09:00:45 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 08 09:00:45 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 08 09:00:45 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 08 09:00:45 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 08 09:00:45 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Closed udev Control Socket.
Oct 08 09:00:45 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Closed udev Kernel Socket.
Oct 08 09:00:45 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 08 09:00:45 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 08 09:00:45 localhost systemd[1]: Starting Cleanup udev Database...
Oct 08 09:00:45 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 08 09:00:45 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 08 09:00:45 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Stopped Create System Users.
Oct 08 09:00:45 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 08 09:00:45 localhost systemd[1]: Finished Cleanup udev Database.
Oct 08 09:00:45 localhost systemd[1]: Reached target Switch Root.
Oct 08 09:00:45 localhost systemd[1]: Starting Switch Root...
Oct 08 09:00:45 localhost systemd[1]: Switching root.
Oct 08 09:00:45 localhost systemd-journald[308]: Journal stopped
Oct 08 09:00:46 localhost systemd-journald[308]: Received SIGTERM from PID 1 (systemd).
Oct 08 09:00:46 localhost kernel: audit: type=1404 audit(1759914045.768:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 08 09:00:46 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:00:46 localhost kernel: SELinux:  policy capability open_perms=1
Oct 08 09:00:46 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:00:46 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:00:46 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:00:46 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:00:46 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:00:46 localhost kernel: audit: type=1403 audit(1759914045.931:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 08 09:00:46 localhost systemd[1]: Successfully loaded SELinux policy in 167.858ms.
Oct 08 09:00:46 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.051ms.
Oct 08 09:00:46 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 08 09:00:46 localhost systemd[1]: Detected virtualization kvm.
Oct 08 09:00:46 localhost systemd[1]: Detected architecture x86-64.
Oct 08 09:00:46 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:00:46 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 08 09:00:46 localhost systemd[1]: Stopped Switch Root.
Oct 08 09:00:46 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 08 09:00:46 localhost systemd[1]: Created slice Slice /system/getty.
Oct 08 09:00:46 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 08 09:00:46 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 08 09:00:46 localhost systemd[1]: Created slice User and Session Slice.
Oct 08 09:00:46 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 08 09:00:46 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 08 09:00:46 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 08 09:00:46 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 08 09:00:46 localhost systemd[1]: Stopped target Switch Root.
Oct 08 09:00:46 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 08 09:00:46 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 08 09:00:46 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 08 09:00:46 localhost systemd[1]: Reached target Path Units.
Oct 08 09:00:46 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 08 09:00:46 localhost systemd[1]: Reached target Slice Units.
Oct 08 09:00:46 localhost systemd[1]: Reached target Swaps.
Oct 08 09:00:46 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 08 09:00:46 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 08 09:00:46 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 08 09:00:46 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 08 09:00:46 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 08 09:00:46 localhost systemd[1]: Listening on udev Control Socket.
Oct 08 09:00:46 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 08 09:00:46 localhost systemd[1]: Mounting Huge Pages File System...
Oct 08 09:00:46 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 08 09:00:46 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 08 09:00:46 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 08 09:00:46 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 08 09:00:46 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 08 09:00:46 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 08 09:00:46 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 08 09:00:46 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 08 09:00:46 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 08 09:00:46 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 08 09:00:46 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 08 09:00:46 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 08 09:00:46 localhost systemd[1]: Stopped Journal Service.
Oct 08 09:00:46 localhost kernel: fuse: init (API version 7.37)
Oct 08 09:00:46 localhost systemd[1]: Starting Journal Service...
Oct 08 09:00:46 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 08 09:00:46 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 08 09:00:46 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 08 09:00:46 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 08 09:00:46 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 08 09:00:46 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 08 09:00:46 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 08 09:00:46 localhost systemd[1]: Mounted Huge Pages File System.
Oct 08 09:00:46 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 08 09:00:46 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 08 09:00:46 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 08 09:00:46 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 08 09:00:46 localhost systemd-journald[678]: Journal started
Oct 08 09:00:46 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 08 09:00:46 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 08 09:00:46 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 08 09:00:46 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 08 09:00:46 localhost systemd[1]: Started Journal Service.
Oct 08 09:00:46 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 08 09:00:46 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 08 09:00:46 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 08 09:00:46 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 08 09:00:46 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 08 09:00:46 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 08 09:00:46 localhost kernel: ACPI: bus type drm_connector registered
Oct 08 09:00:46 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 08 09:00:46 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 08 09:00:46 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 08 09:00:46 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 08 09:00:46 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 08 09:00:46 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 08 09:00:46 localhost systemd[1]: Mounting FUSE Control File System...
Oct 08 09:00:46 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 08 09:00:46 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 08 09:00:46 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 08 09:00:46 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 08 09:00:46 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 08 09:00:46 localhost systemd[1]: Starting Create System Users...
Oct 08 09:00:46 localhost systemd[1]: Mounted FUSE Control File System.
Oct 08 09:00:46 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 08 09:00:46 localhost systemd-journald[678]: Received client request to flush runtime journal.
Oct 08 09:00:46 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 08 09:00:46 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 08 09:00:46 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 08 09:00:46 localhost systemd[1]: Finished Create System Users.
Oct 08 09:00:46 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 08 09:00:46 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 08 09:00:46 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 08 09:00:46 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 08 09:00:46 localhost systemd[1]: Reached target Local File Systems.
Oct 08 09:00:46 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 08 09:00:46 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 08 09:00:46 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 08 09:00:46 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 08 09:00:46 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 08 09:00:46 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 08 09:00:46 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 08 09:00:46 localhost bootctl[697]: Couldn't find EFI system partition, skipping.
Oct 08 09:00:46 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 08 09:00:46 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 08 09:00:46 localhost systemd[1]: Starting Security Auditing Service...
Oct 08 09:00:46 localhost systemd[1]: Starting RPC Bind...
Oct 08 09:00:46 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 08 09:00:46 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 08 09:00:46 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 08 09:00:46 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 08 09:00:47 localhost systemd[1]: Started RPC Bind.
Oct 08 09:00:47 localhost augenrules[708]: /sbin/augenrules: No change
Oct 08 09:00:47 localhost augenrules[723]: No rules
Oct 08 09:00:47 localhost augenrules[723]: enabled 1
Oct 08 09:00:47 localhost augenrules[723]: failure 1
Oct 08 09:00:47 localhost augenrules[723]: pid 703
Oct 08 09:00:47 localhost augenrules[723]: rate_limit 0
Oct 08 09:00:47 localhost augenrules[723]: backlog_limit 8192
Oct 08 09:00:47 localhost augenrules[723]: lost 0
Oct 08 09:00:47 localhost augenrules[723]: backlog 3
Oct 08 09:00:47 localhost augenrules[723]: backlog_wait_time 60000
Oct 08 09:00:47 localhost augenrules[723]: backlog_wait_time_actual 0
Oct 08 09:00:47 localhost augenrules[723]: enabled 1
Oct 08 09:00:47 localhost augenrules[723]: failure 1
Oct 08 09:00:47 localhost augenrules[723]: pid 703
Oct 08 09:00:47 localhost augenrules[723]: rate_limit 0
Oct 08 09:00:47 localhost augenrules[723]: backlog_limit 8192
Oct 08 09:00:47 localhost augenrules[723]: lost 0
Oct 08 09:00:47 localhost augenrules[723]: backlog 0
Oct 08 09:00:47 localhost augenrules[723]: backlog_wait_time 60000
Oct 08 09:00:47 localhost augenrules[723]: backlog_wait_time_actual 0
Oct 08 09:00:47 localhost augenrules[723]: enabled 1
Oct 08 09:00:47 localhost augenrules[723]: failure 1
Oct 08 09:00:47 localhost augenrules[723]: pid 703
Oct 08 09:00:47 localhost augenrules[723]: rate_limit 0
Oct 08 09:00:47 localhost augenrules[723]: backlog_limit 8192
Oct 08 09:00:47 localhost augenrules[723]: lost 0
Oct 08 09:00:47 localhost augenrules[723]: backlog 2
Oct 08 09:00:47 localhost augenrules[723]: backlog_wait_time 60000
Oct 08 09:00:47 localhost augenrules[723]: backlog_wait_time_actual 0
Oct 08 09:00:47 localhost systemd[1]: Started Security Auditing Service.
Oct 08 09:00:47 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 08 09:00:47 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 08 09:00:47 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 08 09:00:47 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 08 09:00:47 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 08 09:00:47 localhost systemd[1]: Starting Update is Completed...
Oct 08 09:00:47 localhost systemd[1]: Finished Update is Completed.
Oct 08 09:00:47 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Oct 08 09:00:47 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 08 09:00:47 localhost systemd[1]: Reached target System Initialization.
Oct 08 09:00:47 localhost systemd[1]: Started dnf makecache --timer.
Oct 08 09:00:47 localhost systemd[1]: Started Daily rotation of log files.
Oct 08 09:00:47 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 08 09:00:47 localhost systemd[1]: Reached target Timer Units.
Oct 08 09:00:47 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 08 09:00:47 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 08 09:00:47 localhost systemd[1]: Reached target Socket Units.
Oct 08 09:00:47 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 08 09:00:47 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 08 09:00:47 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 08 09:00:47 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 08 09:00:47 localhost systemd-udevd[750]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 09:00:47 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 08 09:00:47 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 08 09:00:47 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 08 09:00:47 localhost systemd[1]: Reached target Basic System.
Oct 08 09:00:47 localhost dbus-broker-lau[754]: Ready
Oct 08 09:00:47 localhost systemd[1]: Starting NTP client/server...
Oct 08 09:00:47 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 08 09:00:47 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 08 09:00:47 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 08 09:00:47 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 08 09:00:47 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 08 09:00:47 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 08 09:00:47 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 08 09:00:47 localhost chronyd[791]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 08 09:00:47 localhost chronyd[791]: Loaded 0 symmetric keys
Oct 08 09:00:47 localhost systemd[1]: Started irqbalance daemon.
Oct 08 09:00:47 localhost chronyd[791]: Using right/UTC timezone to obtain leap second data
Oct 08 09:00:47 localhost chronyd[791]: Loaded seccomp filter (level 2)
Oct 08 09:00:47 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 08 09:00:47 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 09:00:47 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 09:00:47 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 09:00:47 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 08 09:00:47 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 08 09:00:47 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 08 09:00:47 localhost systemd[1]: Starting User Login Management...
Oct 08 09:00:47 localhost kernel: kvm_amd: TSC scaling supported
Oct 08 09:00:47 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 08 09:00:47 localhost kernel: kvm_amd: Nested Paging enabled
Oct 08 09:00:47 localhost kernel: kvm_amd: LBR virtualization supported
Oct 08 09:00:47 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 08 09:00:47 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 08 09:00:47 localhost kernel: Console: switching to colour dummy device 80x25
Oct 08 09:00:47 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 08 09:00:47 localhost kernel: [drm] features: -context_init
Oct 08 09:00:47 localhost systemd[1]: Started NTP client/server.
Oct 08 09:00:47 localhost kernel: [drm] number of scanouts: 1
Oct 08 09:00:47 localhost kernel: [drm] number of cap sets: 0
Oct 08 09:00:47 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 08 09:00:47 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 08 09:00:47 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 08 09:00:47 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 08 09:00:47 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 08 09:00:47 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 08 09:00:47 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 08 09:00:47 localhost systemd-logind[798]: New seat seat0.
Oct 08 09:00:47 localhost systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 08 09:00:47 localhost systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 08 09:00:47 localhost systemd[1]: Started User Login Management.
Oct 08 09:00:47 localhost iptables.init[789]: iptables: Applying firewall rules: [  OK  ]
Oct 08 09:00:47 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 08 09:00:48 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 08 Oct 2025 09:00:48 +0000. Up 6.91 seconds.
Oct 08 09:00:48 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 08 09:00:48 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 08 09:00:48 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp8r7joyua.mount: Deactivated successfully.
Oct 08 09:00:48 localhost systemd[1]: Starting Hostname Service...
Oct 08 09:00:48 localhost systemd[1]: Started Hostname Service.
Oct 08 09:00:48 np0005475493.novalocal systemd-hostnamed[854]: Hostname set to <np0005475493.novalocal> (static)
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Reached target Preparation for Network.
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Starting Network Manager...
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.8641] NetworkManager (version 1.54.1-1.el9) is starting... (boot:82191aaa-5b9a-46b2-ace7-0656efb209fc)
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.8653] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.8828] manager[0x55ef394f6080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.8907] hostname: hostname: using hostnamed
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.8908] hostname: static hostname changed from (none) to "np0005475493.novalocal"
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.8918] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9103] manager[0x55ef394f6080]: rfkill: Wi-Fi hardware radio set enabled
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9106] manager[0x55ef394f6080]: rfkill: WWAN hardware radio set enabled
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9227] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9229] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9230] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9232] manager: Networking is enabled by state file
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9237] settings: Loaded settings plugin: keyfile (internal)
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9285] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9329] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9365] dhcp: init: Using DHCP client 'internal'
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9370] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9401] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9422] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9442] device (lo): Activation: starting connection 'lo' (04954bd0-4d1f-4562-9334-15a987bf371b)
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9463] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9471] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Started Network Manager.
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9525] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9534] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9539] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9544] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9548] device (eth0): carrier: link connected
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9556] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Reached target Network.
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9569] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9590] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9599] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9601] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9607] manager: NetworkManager state is now CONNECTING
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9611] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9625] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9632] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9657] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9660] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9667] device (lo): Activation: successful, device activated.
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9696] dhcp4 (eth0): state changed new lease, address=38.102.83.224
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9708] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9747] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:00:48 np0005475493.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9776] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9778] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9785] manager: NetworkManager state is now CONNECTED_SITE
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9796] device (eth0): Activation: successful, device activated.
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9803] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 08 09:00:48 np0005475493.novalocal NetworkManager[858]: <info>  [1759914048.9808] manager: startup complete
Oct 08 09:00:49 np0005475493.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 08 09:00:49 np0005475493.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 08 09:00:49 np0005475493.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 08 09:00:49 np0005475493.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 08 09:00:49 np0005475493.novalocal systemd[1]: Reached target NFS client services.
Oct 08 09:00:49 np0005475493.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 08 09:00:49 np0005475493.novalocal systemd[1]: Reached target Remote File Systems.
Oct 08 09:00:49 np0005475493.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 08 Oct 2025 09:00:49 +0000. Up 8.00 seconds.
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.224         | 255.255.255.0 | global | fa:16:3e:7c:7c:9b |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe7c:7c9b/64 |       .       |  link  | fa:16:3e:7c:7c:9b |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 08 09:00:49 np0005475493.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 08 09:00:50 np0005475493.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Oct 08 09:00:50 np0005475493.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 08 09:00:50 np0005475493.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Oct 08 09:00:50 np0005475493.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Oct 08 09:00:50 np0005475493.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Oct 08 09:00:50 np0005475493.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: Generating public/private rsa key pair.
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: The key fingerprint is:
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: SHA256:zkmyIan+dyRsgZ5A1mcR1AvqsuxRX5EYV9o9u87GRlc root@np0005475493.novalocal
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: The key's randomart image is:
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: +---[RSA 3072]----+
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |   . .=+...      |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |  o . ++.+ .     |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: | o   =..+.. o    |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |  . o.. ..   o  E|
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |   ++oo.S   .  . |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |  .o+o+O..  ...  |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: | .oo .oo+  o..   |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: | .o.  . .  o+    |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: | .o... .   oo    |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: +----[SHA256]-----+
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: The key fingerprint is:
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: SHA256:tdkYIjRLpGVMB4n+nOfKFTscibJeulw5MdgfTkVt90k root@np0005475493.novalocal
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: The key's randomart image is:
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: +---[ECDSA 256]---+
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |     =Xo. ..     |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |    .*o+ .  o .E |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |   .. o . +. ....|
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |    .o o = *   ..|
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |    oo=.S + .    |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |     o+O.=       |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |    . =oB        |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |   o = o..       |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |    =.o.         |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: +----[SHA256]-----+
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: The key fingerprint is:
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: SHA256:tVYR/4SbaMQxTiQ1BXMWvmg17fFKyRWOinLym2kAuuE root@np0005475493.novalocal
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: The key's randomart image is:
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: +--[ED25519 256]--+
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |          .o%+=o |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |           = @oo.|
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |          . =.*o+|
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |     .   ..+.= @o|
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |    . .oSoo.= B +|
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |   o   .=. o . . |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |  . o   ..    .  |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |   E     .+      |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: |        .+       |
Oct 08 09:00:50 np0005475493.novalocal cloud-init[921]: +----[SHA256]-----+
Oct 08 09:00:50 np0005475493.novalocal sm-notify[1004]: Version 2.5.4 starting
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 08 09:00:50 np0005475493.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 08 09:00:50 np0005475493.novalocal sshd[1006]: Server listening on :: port 22.
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Reached target Network is Online.
Oct 08 09:00:50 np0005475493.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 08 09:00:50 np0005475493.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 08 09:00:50 np0005475493.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 61% if used.)
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Starting System Logging Service...
Oct 08 09:00:50 np0005475493.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Starting Permit User Sessions...
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Finished Permit User Sessions.
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Started Command Scheduler.
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Started Getty on tty1.
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Reached target Login Prompts.
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Started System Logging Service.
Oct 08 09:00:50 np0005475493.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Oct 08 09:00:50 np0005475493.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Reached target Multi-User System.
Oct 08 09:00:50 np0005475493.novalocal sshd-session[1015]: Unable to negotiate with 38.102.83.114 port 35664: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 08 09:00:50 np0005475493.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 08 09:00:51 np0005475493.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 08 09:00:51 np0005475493.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 08 09:00:51 np0005475493.novalocal sshd-session[1021]: Unable to negotiate with 38.102.83.114 port 35690: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 08 09:00:51 np0005475493.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 09:00:51 np0005475493.novalocal sshd-session[1023]: Unable to negotiate with 38.102.83.114 port 35696: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 08 09:00:51 np0005475493.novalocal sshd-session[1009]: Connection closed by 38.102.83.114 port 35652 [preauth]
Oct 08 09:00:51 np0005475493.novalocal sshd-session[1029]: Unable to negotiate with 38.102.83.114 port 35740: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 08 09:00:51 np0005475493.novalocal sshd-session[1019]: Connection closed by 38.102.83.114 port 35676 [preauth]
Oct 08 09:00:51 np0005475493.novalocal sshd-session[1031]: Unable to negotiate with 38.102.83.114 port 35754: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 08 09:00:51 np0005475493.novalocal sshd-session[1025]: Connection closed by 38.102.83.114 port 35708 [preauth]
Oct 08 09:00:51 np0005475493.novalocal sshd-session[1027]: Connection closed by 38.102.83.114 port 35724 [preauth]
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1035]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 08 Oct 2025 09:00:51 +0000. Up 9.93 seconds.
Oct 08 09:00:51 np0005475493.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 08 09:00:51 np0005475493.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1039]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 08 Oct 2025 09:00:51 +0000. Up 10.37 seconds.
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1041]: #############################################################
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1042]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1044]: 256 SHA256:tdkYIjRLpGVMB4n+nOfKFTscibJeulw5MdgfTkVt90k root@np0005475493.novalocal (ECDSA)
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1046]: 256 SHA256:tVYR/4SbaMQxTiQ1BXMWvmg17fFKyRWOinLym2kAuuE root@np0005475493.novalocal (ED25519)
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1048]: 3072 SHA256:zkmyIan+dyRsgZ5A1mcR1AvqsuxRX5EYV9o9u87GRlc root@np0005475493.novalocal (RSA)
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1049]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1050]: #############################################################
Oct 08 09:00:51 np0005475493.novalocal cloud-init[1039]: Cloud-init v. 24.4-7.el9 finished at Wed, 08 Oct 2025 09:00:51 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.58 seconds
Oct 08 09:00:51 np0005475493.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 08 09:00:51 np0005475493.novalocal systemd[1]: Reached target Cloud-init target.
Oct 08 09:00:51 np0005475493.novalocal systemd[1]: Startup finished in 1.654s (kernel) + 2.795s (initrd) + 6.225s (userspace) = 10.674s.
Oct 08 09:00:53 np0005475493.novalocal chronyd[791]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Oct 08 09:00:53 np0005475493.novalocal chronyd[791]: System clock TAI offset set to 37 seconds
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: Cannot change IRQ 35 affinity: Operation not permitted
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: IRQ 35 affinity is now unmanaged
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: Cannot change IRQ 33 affinity: Operation not permitted
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: IRQ 33 affinity is now unmanaged
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: IRQ 31 affinity is now unmanaged
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: IRQ 28 affinity is now unmanaged
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: Cannot change IRQ 34 affinity: Operation not permitted
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: IRQ 34 affinity is now unmanaged
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: IRQ 32 affinity is now unmanaged
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: IRQ 30 affinity is now unmanaged
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 08 09:00:58 np0005475493.novalocal irqbalance[792]: IRQ 29 affinity is now unmanaged
Oct 08 09:00:59 np0005475493.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 08 09:01:01 np0005475493.novalocal CROND[1055]: (root) CMD (run-parts /etc/cron.hourly)
Oct 08 09:01:01 np0005475493.novalocal run-parts[1058]: (/etc/cron.hourly) starting 0anacron
Oct 08 09:01:02 np0005475493.novalocal anacron[1066]: Anacron started on 2025-10-08
Oct 08 09:01:02 np0005475493.novalocal anacron[1066]: Will run job `cron.daily' in 19 min.
Oct 08 09:01:02 np0005475493.novalocal anacron[1066]: Will run job `cron.weekly' in 39 min.
Oct 08 09:01:02 np0005475493.novalocal anacron[1066]: Will run job `cron.monthly' in 59 min.
Oct 08 09:01:02 np0005475493.novalocal anacron[1066]: Jobs will be executed sequentially
Oct 08 09:01:02 np0005475493.novalocal run-parts[1068]: (/etc/cron.hourly) finished 0anacron
Oct 08 09:01:02 np0005475493.novalocal CROND[1054]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 08 09:01:18 np0005475493.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 08 09:06:41 np0005475493.novalocal sshd-session[1072]: Accepted publickey for zuul from 38.102.83.114 port 45290 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 08 09:06:41 np0005475493.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 08 09:06:41 np0005475493.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 08 09:06:41 np0005475493.novalocal systemd-logind[798]: New session 1 of user zuul.
Oct 08 09:06:41 np0005475493.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 08 09:06:41 np0005475493.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 08 09:06:41 np0005475493.novalocal systemd[1076]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Queued start job for default target Main User Target.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Created slice User Application Slice.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Reached target Paths.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Reached target Timers.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Starting D-Bus User Message Bus Socket...
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Starting Create User's Volatile Files and Directories...
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Finished Create User's Volatile Files and Directories.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Listening on D-Bus User Message Bus Socket.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Reached target Sockets.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Reached target Basic System.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Reached target Main User Target.
Oct 08 09:06:42 np0005475493.novalocal systemd[1076]: Startup finished in 148ms.
Oct 08 09:06:42 np0005475493.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 08 09:06:42 np0005475493.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 08 09:06:42 np0005475493.novalocal sshd-session[1072]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:06:42 np0005475493.novalocal python3[1160]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:06:45 np0005475493.novalocal python3[1188]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:06:53 np0005475493.novalocal python3[1246]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:06:54 np0005475493.novalocal python3[1286]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 08 09:06:57 np0005475493.novalocal python3[1312]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUwaJLzYFiNMxkHUdiBe5nX2QD24WnDKKnH7pPHAe2hO1x3tFKdJakzS4Bfn+9WwnlXOTdyqf0G299I1IneRKu3lN8N3LECCnsTdRIJRu5V7vlSuDb2oOMllH6OwZOlpzosOkxzyaiTlCJ8EBGkWNVPZaggh5EfmAxs8MtYtZinH3BlIW1J+SNhG3E7vCYVwtBNTBCCOf8U+pg16czZVFXrl0bKb2r5PiaOpdn2Fmlwaa1z9/bysG3rCSV5SLgUJ4R+62pk8UrzKC8r3ABILvLnkDelceMZJBXLm79ZmcSL6VZ3KKZAxM+X9gpoqi3TBSj9vB/OpdUAPz/mNonUWSU5fHkbF+UpPWYQGBgz1F1Iu3nTdgNFxA7yQ4NMbyeAA9ir1T0O18DVGRZp4xtPB6jkOSY8yzNk+VF8QSd1VWOet5cVrLOYsXfEOhgwwcl39ellVnP0jkHz6MPI3OcVtof5xX9oKTDZdRU+Fojahw6MKOJf06ThtnT07+ldpJXTG0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:06:57 np0005475493.novalocal python3[1336]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:06:58 np0005475493.novalocal python3[1435]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:06:58 np0005475493.novalocal python3[1506]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759914418.142136-251-248779595669966/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0e07e78396794ac580c5f2d1d33f7e10_id_rsa follow=False checksum=bf7da7a5da71175c68fe99de2c0a4da4e66ecbd4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:06:59 np0005475493.novalocal python3[1630]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:06:59 np0005475493.novalocal python3[1701]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759914419.106228-306-142776437604594/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0e07e78396794ac580c5f2d1d33f7e10_id_rsa.pub follow=False checksum=7e2a4273ddd70a29398d6f290ff6fb3351190f55 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:01 np0005475493.novalocal python3[1749]: ansible-ping Invoked with data=pong
Oct 08 09:07:02 np0005475493.novalocal python3[1773]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:07:05 np0005475493.novalocal python3[1831]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 08 09:07:07 np0005475493.novalocal python3[1863]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:07 np0005475493.novalocal python3[1887]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:07 np0005475493.novalocal python3[1911]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:08 np0005475493.novalocal python3[1935]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:08 np0005475493.novalocal python3[1959]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:08 np0005475493.novalocal python3[1983]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:10 np0005475493.novalocal sudo[2007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvldrsmdjalrbprukgimrbknitctsbmc ; /usr/bin/python3'
Oct 08 09:07:10 np0005475493.novalocal sudo[2007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:10 np0005475493.novalocal python3[2009]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:10 np0005475493.novalocal sudo[2007]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:11 np0005475493.novalocal sudo[2085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yutsgjltylzkguqkhyxrpvmusccbotdn ; /usr/bin/python3'
Oct 08 09:07:11 np0005475493.novalocal sudo[2085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:11 np0005475493.novalocal python3[2087]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:07:11 np0005475493.novalocal sudo[2085]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:12 np0005475493.novalocal sudo[2158]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvsivoldujpddnvtqoypweswdbijkuaf ; /usr/bin/python3'
Oct 08 09:07:12 np0005475493.novalocal sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:12 np0005475493.novalocal python3[2160]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759914431.0952034-31-198404192533432/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:12 np0005475493.novalocal sudo[2158]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:12 np0005475493.novalocal python3[2208]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:13 np0005475493.novalocal python3[2232]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:13 np0005475493.novalocal python3[2256]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:13 np0005475493.novalocal python3[2280]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:14 np0005475493.novalocal python3[2304]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:14 np0005475493.novalocal python3[2328]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:14 np0005475493.novalocal python3[2352]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:14 np0005475493.novalocal python3[2376]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:15 np0005475493.novalocal python3[2400]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:15 np0005475493.novalocal python3[2424]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:15 np0005475493.novalocal python3[2448]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:16 np0005475493.novalocal python3[2472]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:16 np0005475493.novalocal python3[2496]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:16 np0005475493.novalocal python3[2520]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:16 np0005475493.novalocal python3[2544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:17 np0005475493.novalocal python3[2568]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:17 np0005475493.novalocal python3[2592]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:17 np0005475493.novalocal python3[2616]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:18 np0005475493.novalocal python3[2640]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:18 np0005475493.novalocal python3[2664]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:18 np0005475493.novalocal python3[2688]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:18 np0005475493.novalocal python3[2712]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:19 np0005475493.novalocal python3[2736]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:19 np0005475493.novalocal python3[2761]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:19 np0005475493.novalocal python3[2785]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:20 np0005475493.novalocal python3[2809]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:07:22 np0005475493.novalocal sudo[2833]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goukxbiirarveqsuystjdegeigwjtcgw ; /usr/bin/python3'
Oct 08 09:07:22 np0005475493.novalocal sudo[2833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:22 np0005475493.novalocal python3[2835]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 08 09:07:23 np0005475493.novalocal systemd[1]: Starting Time & Date Service...
Oct 08 09:07:23 np0005475493.novalocal systemd[1]: Started Time & Date Service.
Oct 08 09:07:23 np0005475493.novalocal systemd-timedated[2837]: Changed time zone to 'UTC' (UTC).
Oct 08 09:07:24 np0005475493.novalocal sudo[2833]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:24 np0005475493.novalocal sudo[2864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aykykdpdgzkmqbyjgmvrykvzqylhpaib ; /usr/bin/python3'
Oct 08 09:07:24 np0005475493.novalocal sudo[2864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:24 np0005475493.novalocal python3[2866]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:24 np0005475493.novalocal sudo[2864]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:25 np0005475493.novalocal python3[2942]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:07:25 np0005475493.novalocal python3[3013]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759914444.7618375-251-214979393493625/source _original_basename=tmp8stxha5e follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:25 np0005475493.novalocal python3[3113]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:07:26 np0005475493.novalocal python3[3184]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759914445.6647723-301-271847227165989/source _original_basename=tmp0njx4hg5 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:27 np0005475493.novalocal sudo[3284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juuxglppcxbcchddewuhttrgsvrsahym ; /usr/bin/python3'
Oct 08 09:07:27 np0005475493.novalocal sudo[3284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:27 np0005475493.novalocal python3[3286]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:07:27 np0005475493.novalocal sudo[3284]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:27 np0005475493.novalocal sudo[3357]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drccdillwasbdjvxfnxyeyzxxdmvvzir ; /usr/bin/python3'
Oct 08 09:07:27 np0005475493.novalocal sudo[3357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:27 np0005475493.novalocal python3[3359]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759914446.9575613-381-66282470237247/source _original_basename=tmpksyt5gjv follow=False checksum=332c94ac911d053598365a4ff7b72c4143f36dd6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:27 np0005475493.novalocal sudo[3357]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:28 np0005475493.novalocal python3[3407]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:07:28 np0005475493.novalocal python3[3433]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:07:28 np0005475493.novalocal sudo[3511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xquvilhbvlswgcrvzlkkfriigtigfszr ; /usr/bin/python3'
Oct 08 09:07:28 np0005475493.novalocal sudo[3511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:29 np0005475493.novalocal python3[3513]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:07:29 np0005475493.novalocal sudo[3511]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:29 np0005475493.novalocal sudo[3584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mywpyprvnsgctxbwqeolztygyudiafyw ; /usr/bin/python3'
Oct 08 09:07:29 np0005475493.novalocal sudo[3584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:29 np0005475493.novalocal python3[3586]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759914448.7362556-451-201566924643088/source _original_basename=tmpumu_mpiu follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:29 np0005475493.novalocal sudo[3584]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:29 np0005475493.novalocal sudo[3635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmmanojndvxpuzvwbnhcaadejiztexkz ; /usr/bin/python3'
Oct 08 09:07:29 np0005475493.novalocal sudo[3635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:30 np0005475493.novalocal python3[3637]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-8cbd-24c0-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:07:30 np0005475493.novalocal sudo[3635]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:30 np0005475493.novalocal python3[3665]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-8cbd-24c0-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 08 09:07:32 np0005475493.novalocal python3[3693]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:49 np0005475493.novalocal sudo[3717]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqprbmwouioaibawtvglihyjwypepeex ; /usr/bin/python3'
Oct 08 09:07:49 np0005475493.novalocal sudo[3717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:07:49 np0005475493.novalocal python3[3719]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:07:49 np0005475493.novalocal sudo[3717]: pam_unix(sudo:session): session closed for user root
Oct 08 09:07:54 np0005475493.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 08 09:08:28 np0005475493.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 08 09:08:28 np0005475493.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 08 09:08:28 np0005475493.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 08 09:08:28 np0005475493.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 08 09:08:28 np0005475493.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 08 09:08:28 np0005475493.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 08 09:08:28 np0005475493.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 08 09:08:28 np0005475493.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 08 09:08:28 np0005475493.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 08 09:08:28 np0005475493.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9499] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 08 09:08:28 np0005475493.novalocal systemd-udevd[3722]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9745] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9769] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9773] device (eth1): carrier: link connected
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9774] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9779] policy: auto-activating connection 'Wired connection 1' (aa7d912d-605e-338f-afad-61058792d4cf)
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9783] device (eth1): Activation: starting connection 'Wired connection 1' (aa7d912d-605e-338f-afad-61058792d4cf)
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9784] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9787] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9792] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:08:28 np0005475493.novalocal NetworkManager[858]: <info>  [1759914508.9796] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 08 09:08:29 np0005475493.novalocal python3[3749]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-9636-9f2e-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:08:39 np0005475493.novalocal sudo[3828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkocbnoznmhciofjrhrfofttkxkinghf ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 08 09:08:39 np0005475493.novalocal sudo[3828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:08:39 np0005475493.novalocal python3[3830]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:08:39 np0005475493.novalocal sudo[3828]: pam_unix(sudo:session): session closed for user root
Oct 08 09:08:40 np0005475493.novalocal sudo[3901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkzqumzppzrnobiajlmevuringobkrbx ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 08 09:08:40 np0005475493.novalocal sudo[3901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:08:40 np0005475493.novalocal python3[3903]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759914519.61463-104-26712893709965/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=b44b64f1176e3f41f137901c4d0c65fc49f732d5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:08:40 np0005475493.novalocal sudo[3901]: pam_unix(sudo:session): session closed for user root
Oct 08 09:08:40 np0005475493.novalocal sudo[3951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvewyuvmzokudcliwfdhvkeevpstsfwd ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 08 09:08:40 np0005475493.novalocal sudo[3951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:08:41 np0005475493.novalocal python3[3953]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Stopping Network Manager...
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[858]: <info>  [1759914521.2358] caught SIGTERM, shutting down normally.
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[858]: <info>  [1759914521.2368] dhcp4 (eth0): canceled DHCP transaction
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[858]: <info>  [1759914521.2368] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[858]: <info>  [1759914521.2368] dhcp4 (eth0): state changed no lease
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[858]: <info>  [1759914521.2370] manager: NetworkManager state is now CONNECTING
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[858]: <info>  [1759914521.2560] dhcp4 (eth1): canceled DHCP transaction
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[858]: <info>  [1759914521.2561] dhcp4 (eth1): state changed no lease
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[858]: <info>  [1759914521.2609] exiting (success)
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Stopped Network Manager.
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: NetworkManager.service: Consumed 2.818s CPU time, 10.0M memory peak.
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Starting Network Manager...
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.3406] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:82191aaa-5b9a-46b2-ace7-0656efb209fc)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.3409] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.3480] manager[0x556130e34070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Starting Hostname Service...
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Started Hostname Service.
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4499] hostname: hostname: using hostnamed
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4502] hostname: static hostname changed from (none) to "np0005475493.novalocal"
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4507] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4512] manager[0x556130e34070]: rfkill: Wi-Fi hardware radio set enabled
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4512] manager[0x556130e34070]: rfkill: WWAN hardware radio set enabled
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4539] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4539] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4540] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4541] manager: Networking is enabled by state file
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4543] settings: Loaded settings plugin: keyfile (internal)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4547] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4572] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4581] dhcp: init: Using DHCP client 'internal'
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4583] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4586] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4590] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4596] device (lo): Activation: starting connection 'lo' (04954bd0-4d1f-4562-9334-15a987bf371b)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4601] device (eth0): carrier: link connected
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4604] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4607] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4607] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4611] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4616] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4621] device (eth1): carrier: link connected
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4624] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4627] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (aa7d912d-605e-338f-afad-61058792d4cf) (indicated)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4627] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4631] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4635] device (eth1): Activation: starting connection 'Wired connection 1' (aa7d912d-605e-338f-afad-61058792d4cf)
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Started Network Manager.
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4640] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4643] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4645] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4646] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4647] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4649] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4651] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4653] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4655] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4669] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4671] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4684] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4688] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4710] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4712] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4717] device (lo): Activation: successful, device activated.
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4725] dhcp4 (eth0): state changed new lease, address=38.102.83.224
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4732] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 08 09:08:41 np0005475493.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4900] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal sudo[3951]: pam_unix(sudo:session): session closed for user root
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4940] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4942] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4947] manager: NetworkManager state is now CONNECTED_SITE
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4952] device (eth0): Activation: successful, device activated.
Oct 08 09:08:41 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914521.4959] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 08 09:08:41 np0005475493.novalocal python3[4039]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-9636-9f2e-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:08:51 np0005475493.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 08 09:09:11 np0005475493.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3104] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 08 09:09:26 np0005475493.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 08 09:09:26 np0005475493.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3379] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3382] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3394] device (eth1): Activation: successful, device activated.
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3405] manager: startup complete
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3408] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <warn>  [1759914566.3418] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3428] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 08 09:09:26 np0005475493.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3541] dhcp4 (eth1): canceled DHCP transaction
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3541] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3542] dhcp4 (eth1): state changed no lease
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3568] policy: auto-activating connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5)
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3576] device (eth1): Activation: starting connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5)
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3577] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3582] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3595] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.3607] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.4211] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.4214] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:09:26 np0005475493.novalocal NetworkManager[3964]: <info>  [1759914566.4222] device (eth1): Activation: successful, device activated.
Oct 08 09:09:28 np0005475493.novalocal systemd[1076]: Starting Mark boot as successful...
Oct 08 09:09:28 np0005475493.novalocal systemd[1076]: Finished Mark boot as successful.
Oct 08 09:09:36 np0005475493.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 08 09:09:41 np0005475493.novalocal sshd-session[1087]: Received disconnect from 38.102.83.114 port 45290:11: disconnected by user
Oct 08 09:09:41 np0005475493.novalocal sshd-session[1087]: Disconnected from user zuul 38.102.83.114 port 45290
Oct 08 09:09:41 np0005475493.novalocal sshd-session[1072]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:09:41 np0005475493.novalocal systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Oct 08 09:10:45 np0005475493.novalocal sshd-session[4069]: Accepted publickey for zuul from 38.102.83.114 port 51658 ssh2: RSA SHA256:gAGXrS9nBEZo6eSiaUIpvcgcfSt2T2MqoUt9m43i77Q
Oct 08 09:10:45 np0005475493.novalocal systemd-logind[798]: New session 3 of user zuul.
Oct 08 09:10:45 np0005475493.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 08 09:10:45 np0005475493.novalocal sshd-session[4069]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:10:45 np0005475493.novalocal sudo[4148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyjphassqehzilettthmrrocnfwwzeli ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 08 09:10:45 np0005475493.novalocal sudo[4148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:10:45 np0005475493.novalocal python3[4150]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:10:45 np0005475493.novalocal sudo[4148]: pam_unix(sudo:session): session closed for user root
Oct 08 09:10:46 np0005475493.novalocal sudo[4221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amtzwwbwehztvexthfckdfkvcdpdvryy ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 08 09:10:46 np0005475493.novalocal sudo[4221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:10:46 np0005475493.novalocal python3[4223]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759914645.4919796-373-274740485622830/source _original_basename=tmpzy5f1wjf follow=False checksum=12754a60c85d51e037de99da2edf9af2b613c919 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:10:46 np0005475493.novalocal sudo[4221]: pam_unix(sudo:session): session closed for user root
Oct 08 09:10:50 np0005475493.novalocal sshd-session[4072]: Connection closed by 38.102.83.114 port 51658
Oct 08 09:10:50 np0005475493.novalocal sshd-session[4069]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:10:50 np0005475493.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 08 09:10:50 np0005475493.novalocal systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Oct 08 09:10:50 np0005475493.novalocal systemd-logind[798]: Removed session 3.
Oct 08 09:12:28 np0005475493.novalocal systemd[1076]: Created slice User Background Tasks Slice.
Oct 08 09:12:28 np0005475493.novalocal systemd[1076]: Starting Cleanup of User's Temporary Files and Directories...
Oct 08 09:12:28 np0005475493.novalocal systemd[1076]: Finished Cleanup of User's Temporary Files and Directories.
Oct 08 09:12:39 np0005475493.novalocal sshd-session[4250]: Invalid user  from 121.41.37.60 port 58968
Oct 08 09:12:46 np0005475493.novalocal sshd-session[4250]: Connection closed by invalid user  121.41.37.60 port 58968 [preauth]
Oct 08 09:14:53 np0005475493.novalocal sshd-session[4252]: Invalid user support from 78.128.112.74 port 58240
Oct 08 09:14:53 np0005475493.novalocal sshd-session[4252]: Connection closed by invalid user support 78.128.112.74 port 58240 [preauth]
Oct 08 09:15:55 np0005475493.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Oct 08 09:15:55 np0005475493.novalocal sshd-session[4257]: Accepted publickey for zuul from 38.102.83.114 port 41276 ssh2: RSA SHA256:gAGXrS9nBEZo6eSiaUIpvcgcfSt2T2MqoUt9m43i77Q
Oct 08 09:15:55 np0005475493.novalocal systemd-logind[798]: New session 4 of user zuul.
Oct 08 09:15:55 np0005475493.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 08 09:15:55 np0005475493.novalocal sshd-session[4257]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:15:55 np0005475493.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 08 09:15:55 np0005475493.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Oct 08 09:15:55 np0005475493.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 08 09:15:55 np0005475493.novalocal sudo[4287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulqyaqmsfdiwoeksiojugxxhkoximlno ; /usr/bin/python3'
Oct 08 09:15:55 np0005475493.novalocal sudo[4287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:15:56 np0005475493.novalocal python3[4289]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-1895-3e92-000000001cfa-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:15:56 np0005475493.novalocal sudo[4287]: pam_unix(sudo:session): session closed for user root
Oct 08 09:15:56 np0005475493.novalocal sudo[4315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrcbxaxfgynlgjqmhebudblfyzyxxxh ; /usr/bin/python3'
Oct 08 09:15:56 np0005475493.novalocal sudo[4315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:15:56 np0005475493.novalocal python3[4317]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:15:56 np0005475493.novalocal sudo[4315]: pam_unix(sudo:session): session closed for user root
Oct 08 09:15:56 np0005475493.novalocal sudo[4341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unalmwwsueailfeojbcwdkomvinfcpqj ; /usr/bin/python3'
Oct 08 09:15:56 np0005475493.novalocal sudo[4341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:15:56 np0005475493.novalocal python3[4344]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:15:56 np0005475493.novalocal sudo[4341]: pam_unix(sudo:session): session closed for user root
Oct 08 09:15:56 np0005475493.novalocal sudo[4368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mruipwknqywsrtvggubxtyuieqtbriwj ; /usr/bin/python3'
Oct 08 09:15:56 np0005475493.novalocal sudo[4368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:15:57 np0005475493.novalocal python3[4370]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:15:57 np0005475493.novalocal sudo[4368]: pam_unix(sudo:session): session closed for user root
Oct 08 09:15:57 np0005475493.novalocal sudo[4394]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvltidptfcslveaokpwzuhwcxrgfkxay ; /usr/bin/python3'
Oct 08 09:15:57 np0005475493.novalocal sudo[4394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:15:57 np0005475493.novalocal python3[4396]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:15:57 np0005475493.novalocal sudo[4394]: pam_unix(sudo:session): session closed for user root
Oct 08 09:15:57 np0005475493.novalocal sudo[4420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkflyukifbaffmyhqyyordwtrgzobgod ; /usr/bin/python3'
Oct 08 09:15:57 np0005475493.novalocal sudo[4420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:15:57 np0005475493.novalocal python3[4422]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:15:57 np0005475493.novalocal python3[4422]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 08 09:15:57 np0005475493.novalocal sudo[4420]: pam_unix(sudo:session): session closed for user root
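The lineinfile task above replaces the commented default in /etc/systemd/system.conf (regexp ^#DefaultIOAccounting=no) with DefaultIOAccounting=yes; with IO accounting on, systemd enables the cgroup v2 io controller on the top-level slices, which is presumably why the play follows up with a daemon-reload and then waits for /sys/fs/cgroup/system.slice/io.max to appear. A minimal check of the edit, assuming the stock system.conf path from the log:

    # expected output after the lineinfile task: DefaultIOAccounting=yes
    grep '^DefaultIOAccounting' /etc/systemd/system.conf
    systemctl daemon-reload    # same effect as the systemd_service task with daemon_reload=true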
Oct 08 09:15:58 np0005475493.novalocal sudo[4446]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvauhyabzzyghzjpovtdwbuouvrzgygk ; /usr/bin/python3'
Oct 08 09:15:58 np0005475493.novalocal sudo[4446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:15:59 np0005475493.novalocal python3[4448]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 09:15:59 np0005475493.novalocal systemd[1]: Reloading.
Oct 08 09:15:59 np0005475493.novalocal systemd-rc-local-generator[4466]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:15:59 np0005475493.novalocal systemd[1]: Starting dnf makecache...
Oct 08 09:15:59 np0005475493.novalocal sudo[4446]: pam_unix(sudo:session): session closed for user root
Oct 08 09:15:59 np0005475493.novalocal dnf[4479]: Failed determining last makecache time.
Oct 08 09:16:00 np0005475493.novalocal dnf[4479]: CentOS Stream 9 - BaseOS                         24 kB/s | 6.7 kB     00:00
Oct 08 09:16:00 np0005475493.novalocal dnf[4479]: CentOS Stream 9 - AppStream                      63 kB/s | 6.8 kB     00:00
Oct 08 09:16:00 np0005475493.novalocal dnf[4479]: CentOS Stream 9 - CRB                            75 kB/s | 6.6 kB     00:00
Oct 08 09:16:00 np0005475493.novalocal sudo[4509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cudoogsrlspjtjjybihlqwqrncdwbibn ; /usr/bin/python3'
Oct 08 09:16:00 np0005475493.novalocal sudo[4509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:16:00 np0005475493.novalocal dnf[4479]: CentOS Stream 9 - Extras packages                74 kB/s | 8.0 kB     00:00
Oct 08 09:16:00 np0005475493.novalocal python3[4511]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 08 09:16:00 np0005475493.novalocal sudo[4509]: pam_unix(sudo:session): session closed for user root
Oct 08 09:16:01 np0005475493.novalocal dnf[4479]: Metadata cache created.
Oct 08 09:16:01 np0005475493.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 08 09:16:01 np0005475493.novalocal systemd[1]: Finished dnf makecache.
Oct 08 09:16:01 np0005475493.novalocal sudo[4536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcsvfthlgtwhjuezpsvyrtrcevgcjkpc ; /usr/bin/python3'
Oct 08 09:16:01 np0005475493.novalocal sudo[4536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:16:01 np0005475493.novalocal python3[4538]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:16:01 np0005475493.novalocal sudo[4536]: pam_unix(sudo:session): session closed for user root
Oct 08 09:16:01 np0005475493.novalocal sudo[4564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbzkfwflxqtwrzlaideeagmfzuzrfjmz ; /usr/bin/python3'
Oct 08 09:16:01 np0005475493.novalocal sudo[4564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:16:01 np0005475493.novalocal python3[4566]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:16:01 np0005475493.novalocal sudo[4564]: pam_unix(sudo:session): session closed for user root
Oct 08 09:16:01 np0005475493.novalocal sudo[4592]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zidticacurajtklqtdapeayoeekvshqj ; /usr/bin/python3'
Oct 08 09:16:01 np0005475493.novalocal sudo[4592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:16:01 np0005475493.novalocal python3[4594]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:16:01 np0005475493.novalocal sudo[4592]: pam_unix(sudo:session): session closed for user root
Oct 08 09:16:01 np0005475493.novalocal sudo[4620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paohhvinyufnzmbtpcxebeajfeeloruo ; /usr/bin/python3'
Oct 08 09:16:01 np0005475493.novalocal sudo[4620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:16:02 np0005475493.novalocal python3[4622]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:16:02 np0005475493.novalocal sudo[4620]: pam_unix(sudo:session): session closed for user root
Oct 08 09:16:02 np0005475493.novalocal python3[4649]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-1895-3e92-000000001d00-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
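The four echo commands above write the same throttle line into the io.max file of init.scope and the machine, system and user slices, and the final command simply reads the four files back. In the cgroup v2 io controller an io.max line has the form "MAJOR:MINOR key=value ...": riops/wiops cap read and write IOPS, rbps/wbps cap read and write bytes per second (262144000 is exactly 250 MiB/s), and 252:0 is the major:minor of this instance's root block device (the numbering is host-specific). A sketch of the same write for one slice, using the values from the log:

    # limit system.slice to 18k read/write IOPS and 250 MiB/s in each direction
    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max    # the kernel reports the effective limits back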
Oct 08 09:16:03 np0005475493.novalocal python3[4679]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:16:05 np0005475493.novalocal sshd-session[4262]: Connection closed by 38.102.83.114 port 41276
Oct 08 09:16:05 np0005475493.novalocal sshd-session[4257]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:16:05 np0005475493.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 08 09:16:05 np0005475493.novalocal systemd[1]: session-4.scope: Consumed 3.638s CPU time.
Oct 08 09:16:05 np0005475493.novalocal systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Oct 08 09:16:05 np0005475493.novalocal systemd-logind[798]: Removed session 4.
Oct 08 09:16:07 np0005475493.novalocal sshd-session[4685]: Accepted publickey for zuul from 38.102.83.114 port 37026 ssh2: RSA SHA256:gAGXrS9nBEZo6eSiaUIpvcgcfSt2T2MqoUt9m43i77Q
Oct 08 09:16:07 np0005475493.novalocal systemd-logind[798]: New session 5 of user zuul.
Oct 08 09:16:07 np0005475493.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 08 09:16:07 np0005475493.novalocal sshd-session[4685]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:16:07 np0005475493.novalocal sudo[4712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-digkpeotjkgvrfgqbxwooxfmppomecwe ; /usr/bin/python3'
Oct 08 09:16:07 np0005475493.novalocal sudo[4712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:16:07 np0005475493.novalocal python3[4714]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
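The dnf task above installs podman and buildah; the SELinux SID-table conversions and policy-capability lines that follow are the kernel reloading policy as the container/openstack SELinux packages pulled in by that transaction run their scriptlets (an inference from the timing, not stated explicitly in the log). A quick post-install check, as a sketch:

    rpm -q podman buildah    # confirm both packages landed
    podman --version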
Oct 08 09:16:21 np0005475493.novalocal kernel: SELinux:  Converting 365 SID table entries...
Oct 08 09:16:21 np0005475493.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:16:21 np0005475493.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 08 09:16:21 np0005475493.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:16:21 np0005475493.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:16:21 np0005475493.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:16:21 np0005475493.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:16:21 np0005475493.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:16:30 np0005475493.novalocal kernel: SELinux:  Converting 365 SID table entries...
Oct 08 09:16:30 np0005475493.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:16:30 np0005475493.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 08 09:16:30 np0005475493.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:16:30 np0005475493.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:16:30 np0005475493.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:16:30 np0005475493.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:16:30 np0005475493.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:16:39 np0005475493.novalocal kernel: SELinux:  Converting 365 SID table entries...
Oct 08 09:16:39 np0005475493.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:16:39 np0005475493.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 08 09:16:39 np0005475493.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:16:39 np0005475493.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:16:39 np0005475493.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:16:39 np0005475493.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:16:39 np0005475493.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:16:40 np0005475493.novalocal setsebool[4774]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 08 09:16:40 np0005475493.novalocal setsebool[4774]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
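The two setsebool audit lines record the virt_use_nfs and virt_sandbox_use_all_caps booleans being switched on by root during the same package transaction; given the policy reload that follows at 09:16:50, this was most likely a persistent change, equivalent to:

    # persistent boolean change (-P rebuilds policy, which matches the SELinux reload that follows)
    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1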
Oct 08 09:16:50 np0005475493.novalocal kernel: SELinux:  Converting 368 SID table entries...
Oct 08 09:16:50 np0005475493.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:16:50 np0005475493.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 08 09:16:50 np0005475493.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:16:50 np0005475493.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:16:50 np0005475493.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:16:50 np0005475493.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:16:50 np0005475493.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:17:08 np0005475493.novalocal dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 08 09:17:08 np0005475493.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 09:17:08 np0005475493.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 08 09:17:08 np0005475493.novalocal systemd[1]: Reloading.
Oct 08 09:17:08 np0005475493.novalocal systemd-rc-local-generator[5530]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:17:08 np0005475493.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 09:17:09 np0005475493.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 08 09:17:09 np0005475493.novalocal PackageKit[6426]: daemon start
Oct 08 09:17:09 np0005475493.novalocal systemd[1]: Starting Authorization Manager...
Oct 08 09:17:09 np0005475493.novalocal polkitd[6524]: Started polkitd version 0.117
Oct 08 09:17:09 np0005475493.novalocal polkitd[6524]: Loading rules from directory /etc/polkit-1/rules.d
Oct 08 09:17:09 np0005475493.novalocal polkitd[6524]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 08 09:17:09 np0005475493.novalocal polkitd[6524]: Finished loading, compiling and executing 3 rules
Oct 08 09:17:09 np0005475493.novalocal systemd[1]: Started Authorization Manager.
Oct 08 09:17:09 np0005475493.novalocal polkitd[6524]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 08 09:17:09 np0005475493.novalocal systemd[1]: Started PackageKit Daemon.
Oct 08 09:17:09 np0005475493.novalocal sudo[4712]: pam_unix(sudo:session): session closed for user root
Oct 08 09:17:19 np0005475493.novalocal python3[12623]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-de16-2c75-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:17:20 np0005475493.novalocal kernel: evm: overlay not supported
Oct 08 09:17:20 np0005475493.novalocal systemd[1076]: Starting D-Bus User Message Bus...
Oct 08 09:17:20 np0005475493.novalocal dbus-broker-launch[13083]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 08 09:17:20 np0005475493.novalocal dbus-broker-launch[13083]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 08 09:17:20 np0005475493.novalocal systemd[1076]: Started D-Bus User Message Bus.
Oct 08 09:17:20 np0005475493.novalocal dbus-broker-lau[13083]: Ready
Oct 08 09:17:20 np0005475493.novalocal systemd[1076]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 08 09:17:20 np0005475493.novalocal systemd[1076]: Created slice Slice /user.
Oct 08 09:17:20 np0005475493.novalocal systemd[1076]: podman-13017.scope: unit configures an IP firewall, but not running as root.
Oct 08 09:17:20 np0005475493.novalocal systemd[1076]: (This warning is only shown for the first unit using IP firewalling.)
Oct 08 09:17:20 np0005475493.novalocal systemd[1076]: Started podman-13017.scope.
Oct 08 09:17:20 np0005475493.novalocal systemd[1076]: Started podman-pause-6c5a7e9b.scope.
Oct 08 09:17:21 np0005475493.novalocal sshd-session[4688]: Connection closed by 38.102.83.114 port 37026
Oct 08 09:17:21 np0005475493.novalocal sshd-session[4685]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:17:21 np0005475493.novalocal systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Oct 08 09:17:21 np0005475493.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Oct 08 09:17:21 np0005475493.novalocal systemd[1]: session-5.scope: Consumed 58.328s CPU time.
Oct 08 09:17:21 np0005475493.novalocal systemd-logind[798]: Removed session 5.
Oct 08 09:17:36 np0005475493.novalocal sshd-session[19929]: Connection closed by 38.102.83.97 port 56478 [preauth]
Oct 08 09:17:36 np0005475493.novalocal sshd-session[19932]: Connection closed by 38.102.83.97 port 56482 [preauth]
Oct 08 09:17:36 np0005475493.novalocal sshd-session[19937]: Unable to negotiate with 38.102.83.97 port 56484: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 08 09:17:36 np0005475493.novalocal sshd-session[19934]: Unable to negotiate with 38.102.83.97 port 56488: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 08 09:17:36 np0005475493.novalocal sshd-session[19940]: Unable to negotiate with 38.102.83.97 port 56486: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 08 09:17:41 np0005475493.novalocal sshd-session[21745]: Accepted publickey for zuul from 38.102.83.114 port 50728 ssh2: RSA SHA256:gAGXrS9nBEZo6eSiaUIpvcgcfSt2T2MqoUt9m43i77Q
Oct 08 09:17:41 np0005475493.novalocal systemd-logind[798]: New session 6 of user zuul.
Oct 08 09:17:41 np0005475493.novalocal systemd[1]: Started Session 6 of User zuul.
Oct 08 09:17:41 np0005475493.novalocal sshd-session[21745]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:17:41 np0005475493.novalocal python3[21853]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFow9zNj0F2oq3a4hO/hQaH1lByiJoA0MoTlM589f3ghYSo6Jcv/wEhMSCUcvqB63vjWwEbrK0sbWxkmWWzauzE= zuul@np0005475492.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:17:41 np0005475493.novalocal sudo[22065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shotcgtujlfacsdaizffrnwlvebxphsf ; /usr/bin/python3'
Oct 08 09:17:41 np0005475493.novalocal sudo[22065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:17:42 np0005475493.novalocal python3[22080]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFow9zNj0F2oq3a4hO/hQaH1lByiJoA0MoTlM589f3ghYSo6Jcv/wEhMSCUcvqB63vjWwEbrK0sbWxkmWWzauzE= zuul@np0005475492.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:17:42 np0005475493.novalocal sudo[22065]: pam_unix(sudo:session): session closed for user root
Oct 08 09:17:42 np0005475493.novalocal sudo[22532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkdglnqgnreewawxlvfmgftbohnznzdc ; /usr/bin/python3'
Oct 08 09:17:42 np0005475493.novalocal sudo[22532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:17:43 np0005475493.novalocal python3[22545]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005475493.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 08 09:17:43 np0005475493.novalocal useradd[22620]: new group: name=cloud-admin, GID=1002
Oct 08 09:17:43 np0005475493.novalocal useradd[22620]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 08 09:17:43 np0005475493.novalocal sudo[22532]: pam_unix(sudo:session): session closed for user root
Oct 08 09:17:43 np0005475493.novalocal sudo[22803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpcxmlvateqzvdyhfyyncmzlcweujvce ; /usr/bin/python3'
Oct 08 09:17:43 np0005475493.novalocal sudo[22803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:17:43 np0005475493.novalocal python3[22812]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFow9zNj0F2oq3a4hO/hQaH1lByiJoA0MoTlM589f3ghYSo6Jcv/wEhMSCUcvqB63vjWwEbrK0sbWxkmWWzauzE= zuul@np0005475492.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 08 09:17:43 np0005475493.novalocal sudo[22803]: pam_unix(sudo:session): session closed for user root
Oct 08 09:17:43 np0005475493.novalocal sudo[23105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvxnrmwbawavxfjugjhumyjymtzzkier ; /usr/bin/python3'
Oct 08 09:17:43 np0005475493.novalocal sudo[23105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:17:44 np0005475493.novalocal python3[23115]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:17:44 np0005475493.novalocal sudo[23105]: pam_unix(sudo:session): session closed for user root
Oct 08 09:17:44 np0005475493.novalocal sudo[23362]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evlpcvjumckeripgkanvgpgldcxwzoxv ; /usr/bin/python3'
Oct 08 09:17:44 np0005475493.novalocal sudo[23362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:17:44 np0005475493.novalocal python3[23370]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759915063.821309-150-172706630808486/source _original_basename=tmph4l_6nk3 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:17:44 np0005475493.novalocal sudo[23362]: pam_unix(sudo:session): session closed for user root
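The copy task installs /etc/sudoers.d/cloud-admin with mode 0640; its contents are redacted in the log (content=NOT_LOGGING_PARAMETER). For a deployment user such as cloud-admin this drop-in is typically a passwordless rule; the following is a hypothetical illustration only, not the file that was actually written:

    # hypothetical /etc/sudoers.d/cloud-admin -- the real content is not recorded in this log
    cloud-admin ALL=(ALL) NOPASSWD:ALL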
Oct 08 09:17:45 np0005475493.novalocal sudo[23774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbwclemmjjayamixiunvdbwbwgclomvv ; /usr/bin/python3'
Oct 08 09:17:45 np0005475493.novalocal sudo[23774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:17:45 np0005475493.novalocal python3[23783]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 08 09:17:45 np0005475493.novalocal systemd[1]: Starting Hostname Service...
Oct 08 09:17:45 np0005475493.novalocal systemd[1]: Started Hostname Service.
Oct 08 09:17:45 np0005475493.novalocal systemd-hostnamed[23904]: Changed pretty hostname to 'compute-0'
Oct 08 09:17:45 compute-0 systemd-hostnamed[23904]: Hostname set to <compute-0> (static)
Oct 08 09:17:45 compute-0 NetworkManager[3964]: <info>  [1759915065.5703] hostname: static hostname changed from "np0005475493.novalocal" to "compute-0"
Oct 08 09:17:45 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 08 09:17:45 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 08 09:17:45 compute-0 sudo[23774]: pam_unix(sudo:session): session closed for user root
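The hostname module was invoked with use=systemd, so the change goes through systemd-hostnamed over D-Bus; that is why the Hostname Service unit starts and NetworkManager immediately notices the static hostname change from np0005475493.novalocal to compute-0. The equivalent manual step would be:

    # sets the static (and, as logged, pretty) hostname via systemd-hostnamed
    hostnamectl set-hostname compute-0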
Oct 08 09:17:46 compute-0 sshd-session[21798]: Connection closed by 38.102.83.114 port 50728
Oct 08 09:17:46 compute-0 sshd-session[21745]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:17:46 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 08 09:17:46 compute-0 systemd[1]: session-6.scope: Consumed 2.031s CPU time.
Oct 08 09:17:46 compute-0 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Oct 08 09:17:46 compute-0 systemd-logind[798]: Removed session 6.
Oct 08 09:17:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 09:17:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 09:17:52 compute-0 systemd[1]: man-db-cache-update.service: Consumed 53.006s CPU time.
Oct 08 09:17:52 compute-0 systemd[1]: run-r332a0f7ba49b44cf913cf9270793d67b.service: Deactivated successfully.
Oct 08 09:17:55 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 08 09:18:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 08 09:20:02 compute-0 anacron[1066]: Job `cron.daily' started
Oct 08 09:20:02 compute-0 anacron[1066]: Job `cron.daily' terminated
Oct 08 09:21:15 compute-0 sshd-session[26574]: Accepted publickey for zuul from 38.102.83.97 port 57132 ssh2: RSA SHA256:gAGXrS9nBEZo6eSiaUIpvcgcfSt2T2MqoUt9m43i77Q
Oct 08 09:21:15 compute-0 systemd-logind[798]: New session 7 of user zuul.
Oct 08 09:21:15 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 08 09:21:15 compute-0 sshd-session[26574]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:21:16 compute-0 python3[26650]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:21:18 compute-0 sudo[26764]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwcrtivoshsbtonmwkyrdxisgmnirivf ; /usr/bin/python3'
Oct 08 09:21:18 compute-0 sudo[26764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:18 compute-0 python3[26766]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:21:18 compute-0 sudo[26764]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:18 compute-0 sudo[26837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvmzosthjiclcwntmggdviwpidmofjiw ; /usr/bin/python3'
Oct 08 09:21:18 compute-0 sudo[26837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:18 compute-0 python3[26839]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=delorean.repo follow=False checksum=c02c26d38f431b15f6463fc53c3d93ed5138ff07 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:21:18 compute-0 sudo[26837]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:18 compute-0 sudo[26863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lobfdkcjdqhhtwqjgnmgvicaksowhmpd ; /usr/bin/python3'
Oct 08 09:21:18 compute-0 sudo[26863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:19 compute-0 python3[26865]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:21:19 compute-0 sudo[26863]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:19 compute-0 sudo[26936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qatntrszbngklxvyfyhmkrevccvwpqxf ; /usr/bin/python3'
Oct 08 09:21:19 compute-0 sudo[26936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:19 compute-0 python3[26938]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:21:19 compute-0 sudo[26936]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:19 compute-0 sudo[26962]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iykjmxsldhoqdufzqifhzsbhkjrcssgn ; /usr/bin/python3'
Oct 08 09:21:19 compute-0 sudo[26962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:19 compute-0 python3[26964]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:21:19 compute-0 sudo[26962]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:19 compute-0 sudo[27035]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apuxvtmainqlruhxkbixjzklesnaduhg ; /usr/bin/python3'
Oct 08 09:21:19 compute-0 sudo[27035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:20 compute-0 python3[27037]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:21:20 compute-0 sudo[27035]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:20 compute-0 sudo[27061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phgymzpchepkwuhovevcowouizowyjbe ; /usr/bin/python3'
Oct 08 09:21:20 compute-0 sudo[27061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:20 compute-0 python3[27063]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:21:20 compute-0 sudo[27061]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:20 compute-0 sudo[27134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omidpneyvlgfnmlodesustbjinvfhczb ; /usr/bin/python3'
Oct 08 09:21:20 compute-0 sudo[27134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:20 compute-0 python3[27136]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:21:20 compute-0 sudo[27134]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:20 compute-0 sudo[27160]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgvcmzkxehrhltwonisapazgxvqefznk ; /usr/bin/python3'
Oct 08 09:21:20 compute-0 sudo[27160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:20 compute-0 python3[27162]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:21:20 compute-0 sudo[27160]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:21 compute-0 sudo[27233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjzrqpnvhicsyzqrdtpaekgsyhvsqyap ; /usr/bin/python3'
Oct 08 09:21:21 compute-0 sudo[27233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:21 compute-0 python3[27235]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:21:21 compute-0 sudo[27233]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:21 compute-0 sudo[27259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xazaffmuaylaaxbpvtbmqdgbgbubedre ; /usr/bin/python3'
Oct 08 09:21:21 compute-0 sudo[27259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:21 compute-0 python3[27261]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:21:21 compute-0 sudo[27259]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:21 compute-0 sudo[27332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvnupchgfdzfstjcyekcwuypyegoydkg ; /usr/bin/python3'
Oct 08 09:21:21 compute-0 sudo[27332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:21 compute-0 python3[27334]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:21:21 compute-0 sudo[27332]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:21 compute-0 sudo[27358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xywnbdcsdavrmdxrhickpktasmbgucxg ; /usr/bin/python3'
Oct 08 09:21:21 compute-0 sudo[27358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:22 compute-0 python3[27360]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:21:22 compute-0 sudo[27358]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:22 compute-0 sudo[27431]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tucpvcnwazazohmjwfkchtcphaoxliod ; /usr/bin/python3'
Oct 08 09:21:22 compute-0 sudo[27431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:21:22 compute-0 python3[27433]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:21:22 compute-0 sudo[27431]: pam_unix(sudo:session): session closed for user root
Oct 08 09:21:25 compute-0 sshd-session[27459]: Connection closed by 192.168.122.11 port 32836 [preauth]
Oct 08 09:21:25 compute-0 sshd-session[27462]: Connection closed by 192.168.122.11 port 32842 [preauth]
Oct 08 09:21:25 compute-0 sshd-session[27458]: Unable to negotiate with 192.168.122.11 port 32848: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 08 09:21:25 compute-0 sshd-session[27461]: Unable to negotiate with 192.168.122.11 port 32850: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 08 09:21:25 compute-0 sshd-session[27460]: Unable to negotiate with 192.168.122.11 port 32864: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 08 09:21:34 compute-0 python3[27491]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:22:15 compute-0 PackageKit[6426]: daemon quit
Oct 08 09:22:15 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 08 09:26:34 compute-0 sshd-session[26577]: Received disconnect from 38.102.83.97 port 57132:11: disconnected by user
Oct 08 09:26:34 compute-0 sshd-session[26577]: Disconnected from user zuul 38.102.83.97 port 57132
Oct 08 09:26:34 compute-0 sshd-session[26574]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:26:34 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 08 09:26:34 compute-0 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Oct 08 09:26:34 compute-0 systemd[1]: session-7.scope: Consumed 4.760s CPU time.
Oct 08 09:26:34 compute-0 systemd-logind[798]: Removed session 7.
Oct 08 09:27:48 compute-0 sshd[1006]: Timeout before authentication for connection from 101.126.149.19 to 38.102.83.224, pid = 27498
Oct 08 09:27:54 compute-0 sshd-session[27500]: banner exchange: Connection from 195.178.110.15 port 46710: invalid format
Oct 08 09:27:54 compute-0 sshd-session[27501]: banner exchange: Connection from 195.178.110.15 port 46720: invalid format
Oct 08 09:33:21 compute-0 sshd-session[27503]: Accepted publickey for zuul from 192.168.122.30 port 40520 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:33:21 compute-0 systemd-logind[798]: New session 8 of user zuul.
Oct 08 09:33:21 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 08 09:33:21 compute-0 sshd-session[27503]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:33:22 compute-0 python3.9[27656]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:33:23 compute-0 sudo[27835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhhtljcgsqcfcfvdsgrsgyfrainppcjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916003.37955-56-126864287924984/AnsiballZ_command.py'
Oct 08 09:33:23 compute-0 sudo[27835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:33:23 compute-0 python3.9[27837]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:33:30 compute-0 sudo[27835]: pam_unix(sudo:session): session closed for user root
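The shell block above downloads the repo-setup tool from the openstack-k8s-operators GitHub project, installs it into a throwaway virtualenv under /var/tmp, runs "repo-setup current-podified -b antelope" to lay down the current-podified repository set for the antelope branch, and then removes the checkout. A quick way to confirm the result, as a sketch (the exact .repo file names depend on what repo-setup writes):

    ls /etc/yum.repos.d/*.repo    # repo files from repo-setup plus the delorean files copied earlier
    dnf repolist --enabled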
Oct 08 09:33:31 compute-0 sshd-session[27506]: Connection closed by 192.168.122.30 port 40520
Oct 08 09:33:31 compute-0 sshd-session[27503]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:33:31 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 08 09:33:31 compute-0 systemd[1]: session-8.scope: Consumed 7.599s CPU time.
Oct 08 09:33:31 compute-0 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Oct 08 09:33:31 compute-0 systemd-logind[798]: Removed session 8.
Oct 08 09:33:39 compute-0 sshd-session[27897]: error: kex_exchange_identification: read: Connection reset by peer
Oct 08 09:33:39 compute-0 sshd-session[27897]: Connection reset by 45.140.17.97 port 54399
Oct 08 09:33:47 compute-0 sshd-session[27898]: Accepted publickey for zuul from 192.168.122.30 port 38568 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:33:47 compute-0 systemd-logind[798]: New session 9 of user zuul.
Oct 08 09:33:47 compute-0 systemd[1]: Started Session 9 of User zuul.
Oct 08 09:33:47 compute-0 sshd-session[27898]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:33:48 compute-0 python3.9[28051]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 08 09:33:49 compute-0 python3.9[28225]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:33:50 compute-0 sudo[28375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edpkesibpgkrigcvybqewxqnotaiipfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916030.152472-93-31260223176003/AnsiballZ_command.py'
Oct 08 09:33:50 compute-0 sudo[28375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:33:50 compute-0 python3.9[28377]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:33:50 compute-0 sudo[28375]: pam_unix(sudo:session): session closed for user root
Oct 08 09:33:51 compute-0 sudo[28528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjrsmuojelhaehbatjvucmzwduznlcum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916031.2486613-129-155178998525798/AnsiballZ_stat.py'
Oct 08 09:33:51 compute-0 sudo[28528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:33:51 compute-0 python3.9[28530]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:33:51 compute-0 sudo[28528]: pam_unix(sudo:session): session closed for user root
Oct 08 09:33:52 compute-0 sudo[28680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hywbvqwhpjwruikeccdxnpekzdcrtsya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916032.1335473-153-273005848711548/AnsiballZ_file.py'
Oct 08 09:33:52 compute-0 sudo[28680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:33:52 compute-0 python3.9[28682]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:33:52 compute-0 sudo[28680]: pam_unix(sudo:session): session closed for user root
Oct 08 09:33:53 compute-0 sudo[28832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfgikzdzsrcttockwbxawxykhlmsdvte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916032.9931462-177-150272107467709/AnsiballZ_stat.py'
Oct 08 09:33:53 compute-0 sudo[28832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:33:53 compute-0 python3.9[28834]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:33:53 compute-0 sudo[28832]: pam_unix(sudo:session): session closed for user root
Oct 08 09:33:53 compute-0 sudo[28955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvsarbdrfrjlexeqqoqslhpaxqxptzcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916032.9931462-177-150272107467709/AnsiballZ_copy.py'
Oct 08 09:33:53 compute-0 sudo[28955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:33:54 compute-0 python3.9[28957]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916032.9931462-177-150272107467709/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:33:54 compute-0 sudo[28955]: pam_unix(sudo:session): session closed for user root
Oct 08 09:33:54 compute-0 sudo[29107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cffwytnlqakfncottjafgyhksjjaqadf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916034.4063227-222-194476004258317/AnsiballZ_setup.py'
Oct 08 09:33:54 compute-0 sudo[29107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:33:54 compute-0 python3.9[29109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:33:55 compute-0 sudo[29107]: pam_unix(sudo:session): session closed for user root
Oct 08 09:33:55 compute-0 sudo[29263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjducfkkolqfszkevjehdylvrbmsjest ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916035.4595895-246-78996514133171/AnsiballZ_file.py'
Oct 08 09:33:55 compute-0 sudo[29263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:33:55 compute-0 python3.9[29265]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:33:55 compute-0 sudo[29263]: pam_unix(sudo:session): session closed for user root
Oct 08 09:33:56 compute-0 python3.9[29415]: ansible-ansible.builtin.service_facts Invoked
Oct 08 09:34:00 compute-0 python3.9[29670]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:34:01 compute-0 python3.9[29820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:34:02 compute-0 python3.9[29974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:34:03 compute-0 sudo[30130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucfomokqvfnsbjounqstonwtbbmuuwxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916042.952923-390-173404733290030/AnsiballZ_setup.py'
Oct 08 09:34:03 compute-0 sudo[30130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:34:03 compute-0 python3.9[30132]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:34:03 compute-0 sudo[30130]: pam_unix(sudo:session): session closed for user root
Oct 08 09:34:04 compute-0 sudo[30214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmaibaxiupniiaxrwbxxujxjssgdruth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916042.952923-390-173404733290030/AnsiballZ_dnf.py'
Oct 08 09:34:04 compute-0 sudo[30214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:34:04 compute-0 python3.9[30216]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:34:47 compute-0 systemd[1]: Reloading.
Oct 08 09:34:47 compute-0 systemd-rc-local-generator[30407]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:34:48 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 08 09:34:48 compute-0 systemd[1]: Reloading.
Oct 08 09:34:48 compute-0 systemd-rc-local-generator[30453]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:34:48 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 08 09:34:48 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 08 09:34:48 compute-0 systemd[1]: Reloading.
Oct 08 09:34:48 compute-0 systemd-rc-local-generator[30493]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:34:48 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 08 09:34:49 compute-0 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct 08 09:34:49 compute-0 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct 08 09:34:49 compute-0 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct 08 09:35:48 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Oct 08 09:35:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:35:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 09:35:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:35:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:35:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:35:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:35:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:35:48 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 08 09:35:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 09:35:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 09:35:48 compute-0 systemd[1]: Reloading.
Oct 08 09:35:48 compute-0 systemd-rc-local-generator[30801]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:35:49 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 09:35:49 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 08 09:35:49 compute-0 PackageKit[31040]: daemon start
Oct 08 09:35:49 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 08 09:35:49 compute-0 sudo[30214]: pam_unix(sudo:session): session closed for user root
Oct 08 09:35:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 09:35:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 09:35:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.187s CPU time.
Oct 08 09:35:49 compute-0 systemd[1]: run-r8b2586d03d284a82b696869aad06d2e0.service: Deactivated successfully.
Oct 08 09:36:02 compute-0 sudo[31721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehsjsxrjbhdjyjfudhzzhybbkclovpbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916161.9041362-426-247145334411021/AnsiballZ_command.py'
Oct 08 09:36:02 compute-0 sudo[31721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:02 compute-0 python3.9[31723]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:36:03 compute-0 sudo[31721]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:05 compute-0 sudo[32002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzphkcwzxzetfodejlholglekhlzakns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916164.2323215-450-238612458209228/AnsiballZ_selinux.py'
Oct 08 09:36:05 compute-0 sudo[32002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:05 compute-0 python3.9[32004]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 08 09:36:05 compute-0 sudo[32002]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:06 compute-0 sudo[32154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibaaioclxbgbdmvgejfdouigbilpdzow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916166.024527-483-67736806271643/AnsiballZ_command.py'
Oct 08 09:36:06 compute-0 sudo[32154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:06 compute-0 python3.9[32156]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 08 09:36:07 compute-0 sudo[32154]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:07 compute-0 sudo[32308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eknfvmqqjangxkcfkcvwjqjpxlstaiwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916167.7667844-507-123791164564446/AnsiballZ_file.py'
Oct 08 09:36:07 compute-0 sudo[32308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:08 compute-0 python3.9[32310]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:36:08 compute-0 sudo[32308]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:11 compute-0 sudo[32460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbkjyqwdeffekfavdoiwgyhzlwmtoyba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916170.886964-531-180842356408962/AnsiballZ_mount.py'
Oct 08 09:36:11 compute-0 sudo[32460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:11 compute-0 python3.9[32462]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 08 09:36:11 compute-0 sudo[32460]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:12 compute-0 sudo[32612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjbnyquwvgvyvkalpllzpedouqmmfzki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916172.6252866-615-28101301934779/AnsiballZ_file.py'
Oct 08 09:36:12 compute-0 sudo[32612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:14 compute-0 python3.9[32614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:36:14 compute-0 sudo[32612]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:19 compute-0 sudo[32765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phwbzrmckmjszzjpvfmimyasncqqdoan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916178.975777-639-81543345732600/AnsiballZ_stat.py'
Oct 08 09:36:19 compute-0 sudo[32765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:19 compute-0 python3.9[32767]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:36:19 compute-0 sudo[32765]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:19 compute-0 sudo[32888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-detfbalbbyszbvnnltxcdtfdzwufmzpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916178.975777-639-81543345732600/AnsiballZ_copy.py'
Oct 08 09:36:19 compute-0 sudo[32888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:19 compute-0 python3.9[32890]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916178.975777-639-81543345732600/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:36:20 compute-0 sudo[32888]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:21 compute-0 sudo[33040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emzausamngwvkyysgrobnyimbvcqehhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916181.3891258-720-102831991720984/AnsiballZ_getent.py'
Oct 08 09:36:21 compute-0 sudo[33040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:22 compute-0 python3.9[33042]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 08 09:36:22 compute-0 sudo[33040]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:22 compute-0 sudo[33193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwytnihiwpyptlbxhnuzspuwoqtuqwgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916182.3489451-744-110442583857436/AnsiballZ_group.py'
Oct 08 09:36:22 compute-0 sudo[33193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:23 compute-0 python3.9[33195]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 08 09:36:23 compute-0 groupadd[33196]: group added to /etc/group: name=qemu, GID=107
Oct 08 09:36:23 compute-0 groupadd[33196]: group added to /etc/gshadow: name=qemu
Oct 08 09:36:23 compute-0 groupadd[33196]: new group: name=qemu, GID=107
Oct 08 09:36:23 compute-0 sudo[33193]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:23 compute-0 sudo[33351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwjgjbflzuxtgomdmbzokieozpisjtoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916183.348816-768-121250818240165/AnsiballZ_user.py'
Oct 08 09:36:23 compute-0 sudo[33351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:24 compute-0 python3.9[33353]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 08 09:36:24 compute-0 useradd[33355]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 08 09:36:24 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 09:36:24 compute-0 sudo[33351]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:24 compute-0 sudo[33512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmogngtdouczaeaeqjegpyobqrfohxwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916184.3988292-792-261459920053945/AnsiballZ_getent.py'
Oct 08 09:36:24 compute-0 sudo[33512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:24 compute-0 python3.9[33514]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 08 09:36:24 compute-0 sudo[33512]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:25 compute-0 sudo[33665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcubpgcwwvidovstbmhfllokpklcrdby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916185.1770642-816-84472068463082/AnsiballZ_group.py'
Oct 08 09:36:25 compute-0 sudo[33665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:25 compute-0 python3.9[33667]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 08 09:36:25 compute-0 groupadd[33668]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 08 09:36:25 compute-0 groupadd[33668]: group added to /etc/gshadow: name=hugetlbfs
Oct 08 09:36:25 compute-0 groupadd[33668]: new group: name=hugetlbfs, GID=42477
Oct 08 09:36:25 compute-0 sudo[33665]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:26 compute-0 sudo[33823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yshpxppwxmdwrlbtvaihepxnofahzpja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916186.0225155-843-78209717846767/AnsiballZ_file.py'
Oct 08 09:36:26 compute-0 sudo[33823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:26 compute-0 python3.9[33825]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 08 09:36:26 compute-0 sudo[33823]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:27 compute-0 sudo[33975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auxyhodjldjdjaojbrvlxefwbatyylcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916187.052679-876-138303792022103/AnsiballZ_dnf.py'
Oct 08 09:36:27 compute-0 sudo[33975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:27 compute-0 python3.9[33977]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:36:29 compute-0 sudo[33975]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:29 compute-0 sudo[34128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzkynrfkydeauhawfotuaesjpyrgkwzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916189.5942416-900-8863767700494/AnsiballZ_file.py'
Oct 08 09:36:29 compute-0 sudo[34128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:30 compute-0 python3.9[34130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:36:30 compute-0 sudo[34128]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:30 compute-0 sudo[34280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgsiwiooueovdfpxtxmgffjbsmtpfakj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916190.3191996-924-181292877450218/AnsiballZ_stat.py'
Oct 08 09:36:30 compute-0 sudo[34280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:30 compute-0 python3.9[34282]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:36:30 compute-0 sudo[34280]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:31 compute-0 sudo[34403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaugkfsnpsnuonvhjjgveoqnhhllyahc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916190.3191996-924-181292877450218/AnsiballZ_copy.py'
Oct 08 09:36:31 compute-0 sudo[34403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:31 compute-0 python3.9[34405]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916190.3191996-924-181292877450218/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:36:31 compute-0 sudo[34403]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:32 compute-0 sudo[34555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtewfedfzjilsbsygqbfcsjadpwhaubg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916191.6980762-969-263975634180723/AnsiballZ_systemd.py'
Oct 08 09:36:32 compute-0 sudo[34555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:32 compute-0 python3.9[34557]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:36:32 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 08 09:36:32 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 08 09:36:32 compute-0 kernel: Bridge firewalling registered
Oct 08 09:36:32 compute-0 systemd-modules-load[34561]: Inserted module 'br_netfilter'
Oct 08 09:36:32 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 08 09:36:32 compute-0 sudo[34555]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:33 compute-0 sudo[34716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npbnesuzufsymfaqsgbuzbyqltwyersq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916193.0597684-993-82201197601429/AnsiballZ_stat.py'
Oct 08 09:36:33 compute-0 sudo[34716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:33 compute-0 python3.9[34718]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:36:33 compute-0 sudo[34716]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:33 compute-0 sudo[34839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbzwkgpfrobsundqwnsjbgwsjyekmzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916193.0597684-993-82201197601429/AnsiballZ_copy.py'
Oct 08 09:36:33 compute-0 sudo[34839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:34 compute-0 python3.9[34841]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916193.0597684-993-82201197601429/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:36:34 compute-0 sudo[34839]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:34 compute-0 sudo[34991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzmdaoapeiddkdzkfnrzamupmbwwriom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916194.6903605-1047-167365766788632/AnsiballZ_dnf.py'
Oct 08 09:36:34 compute-0 sudo[34991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:35 compute-0 python3.9[34993]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:36:38 compute-0 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct 08 09:36:38 compute-0 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct 08 09:36:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 09:36:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 09:36:38 compute-0 systemd[1]: Reloading.
Oct 08 09:36:38 compute-0 systemd-rc-local-generator[35056]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:36:38 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 09:36:39 compute-0 sudo[34991]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:40 compute-0 python3.9[37077]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:36:41 compute-0 python3.9[38197]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 08 09:36:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 09:36:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 09:36:42 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.415s CPU time.
Oct 08 09:36:42 compute-0 systemd[1]: run-r64aa309c0fe649c490af704593ff1ca8.service: Deactivated successfully.
Oct 08 09:36:42 compute-0 python3.9[39004]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:36:43 compute-0 sudo[39155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbyoyvwycsjecvhkuhtxdirszahriwru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916202.7505226-1164-280150939723754/AnsiballZ_command.py'
Oct 08 09:36:43 compute-0 sudo[39155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:43 compute-0 python3.9[39157]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:36:43 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 08 09:36:43 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 08 09:36:43 compute-0 sudo[39155]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:44 compute-0 sudo[39528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdysljpcdivfugkpjgzvxqsfhgtiedng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916204.28571-1191-39551873610589/AnsiballZ_systemd.py'
Oct 08 09:36:44 compute-0 sudo[39528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:44 compute-0 python3.9[39530]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:36:44 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 08 09:36:45 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 08 09:36:45 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 08 09:36:45 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 08 09:36:45 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 08 09:36:45 compute-0 sudo[39528]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:45 compute-0 python3.9[39691]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 08 09:36:49 compute-0 sudo[39841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crihsrmkxaevmrrmskqgjlbxgnrgwoil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916209.2086072-1362-86692071138304/AnsiballZ_systemd.py'
Oct 08 09:36:49 compute-0 sudo[39841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:50 compute-0 python3.9[39843]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:36:50 compute-0 systemd[1]: Reloading.
Oct 08 09:36:50 compute-0 systemd-rc-local-generator[39873]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:36:50 compute-0 sudo[39841]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:50 compute-0 sudo[40030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oupdyffotvknfiujlvbnfaictuxduygp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916210.427257-1362-113643861455299/AnsiballZ_systemd.py'
Oct 08 09:36:50 compute-0 sudo[40030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:50 compute-0 python3.9[40032]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:36:51 compute-0 systemd[1]: Reloading.
Oct 08 09:36:51 compute-0 systemd-rc-local-generator[40056]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:36:51 compute-0 sudo[40030]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:52 compute-0 sudo[40218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvvzkbnmjyfztegdzwuabypexlwxknng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916211.881644-1410-96365111071249/AnsiballZ_command.py'
Oct 08 09:36:52 compute-0 sudo[40218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:52 compute-0 python3.9[40220]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:36:52 compute-0 sudo[40218]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:52 compute-0 sudo[40371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bceecbgwpykyymkpueypuvwdbciqcslv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916212.6135266-1434-21252170695978/AnsiballZ_command.py'
Oct 08 09:36:52 compute-0 sudo[40371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:53 compute-0 python3.9[40373]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:36:53 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 08 09:36:53 compute-0 sudo[40371]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:53 compute-0 sudo[40524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbgobdifftnkdijrptjhcfyuyxareqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916213.3281114-1458-91329809456277/AnsiballZ_command.py'
Oct 08 09:36:53 compute-0 sudo[40524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:53 compute-0 python3.9[40526]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:36:55 compute-0 sudo[40524]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:55 compute-0 sudo[40686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfbzuhrggprrsbfgvnyundkcqfzjfvvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916215.546982-1482-258448057250454/AnsiballZ_command.py'
Oct 08 09:36:55 compute-0 sudo[40686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:55 compute-0 python3.9[40688]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:36:55 compute-0 sudo[40686]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:56 compute-0 sudo[40839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afdbfqxgbyzvyxtjszokxthlyxffzyrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916216.374325-1506-67969248956240/AnsiballZ_systemd.py'
Oct 08 09:36:56 compute-0 sudo[40839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:36:57 compute-0 python3.9[40841]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:36:57 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 08 09:36:57 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct 08 09:36:57 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Oct 08 09:36:57 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 08 09:36:57 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 08 09:36:57 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 08 09:36:57 compute-0 sudo[40839]: pam_unix(sudo:session): session closed for user root
Oct 08 09:36:57 compute-0 sshd-session[27901]: Connection closed by 192.168.122.30 port 38568
Oct 08 09:36:57 compute-0 sshd-session[27898]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:36:57 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct 08 09:36:57 compute-0 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Oct 08 09:36:57 compute-0 systemd[1]: session-9.scope: Consumed 2min 7.380s CPU time.
Oct 08 09:36:57 compute-0 systemd-logind[798]: Removed session 9.
Oct 08 09:37:04 compute-0 sshd-session[40871]: Accepted publickey for zuul from 192.168.122.30 port 48946 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:37:04 compute-0 systemd-logind[798]: New session 10 of user zuul.
Oct 08 09:37:04 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 08 09:37:04 compute-0 sshd-session[40871]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:37:05 compute-0 python3.9[41024]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:37:06 compute-0 sudo[41178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybvmffgkddpkbjixxexrkwnugyuzafxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916225.957323-68-269623680677647/AnsiballZ_getent.py'
Oct 08 09:37:06 compute-0 sudo[41178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:06 compute-0 python3.9[41180]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 08 09:37:06 compute-0 sudo[41178]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:07 compute-0 sudo[41331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idahmecrcnlmatuowmwqsdxyuobhwqmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916227.156157-92-205335797211522/AnsiballZ_group.py'
Oct 08 09:37:07 compute-0 sudo[41331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:07 compute-0 python3.9[41333]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 08 09:37:07 compute-0 groupadd[41334]: group added to /etc/group: name=openvswitch, GID=42476
Oct 08 09:37:07 compute-0 groupadd[41334]: group added to /etc/gshadow: name=openvswitch
Oct 08 09:37:07 compute-0 groupadd[41334]: new group: name=openvswitch, GID=42476
Oct 08 09:37:07 compute-0 sudo[41331]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:08 compute-0 sudo[41489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypchttlubfcstcqslummxnialimnlbzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916228.1607795-116-195295350850726/AnsiballZ_user.py'
Oct 08 09:37:08 compute-0 sudo[41489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:08 compute-0 python3.9[41491]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 08 09:37:08 compute-0 useradd[41493]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 08 09:37:08 compute-0 useradd[41493]: add 'openvswitch' to group 'hugetlbfs'
Oct 08 09:37:08 compute-0 useradd[41493]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 08 09:37:08 compute-0 sudo[41489]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:09 compute-0 sudo[41649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uantrgwiavjimqkqqgauxpdfnsohuwnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916229.3466578-146-258287280448679/AnsiballZ_setup.py'
Oct 08 09:37:09 compute-0 sudo[41649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:09 compute-0 python3.9[41651]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:37:10 compute-0 sudo[41649]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:10 compute-0 sudo[41733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqjaevhhxrlgmloqxmrufxvfkhevcjei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916229.3466578-146-258287280448679/AnsiballZ_dnf.py'
Oct 08 09:37:10 compute-0 sudo[41733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:10 compute-0 python3.9[41735]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 08 09:37:12 compute-0 sudo[41733]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:13 compute-0 sudo[41897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivklxgvkcbqffxkubeqoowvczdgugwli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916233.215102-188-253545315925666/AnsiballZ_dnf.py'
Oct 08 09:37:13 compute-0 sudo[41897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:13 compute-0 python3.9[41899]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:37:25 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Oct 08 09:37:25 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:37:25 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 09:37:25 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:37:25 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:37:25 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:37:25 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:37:25 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:37:25 compute-0 groupadd[41922]: group added to /etc/group: name=unbound, GID=993
Oct 08 09:37:25 compute-0 groupadd[41922]: group added to /etc/gshadow: name=unbound
Oct 08 09:37:25 compute-0 groupadd[41922]: new group: name=unbound, GID=993
Oct 08 09:37:25 compute-0 useradd[41929]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 08 09:37:25 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 08 09:37:25 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 08 09:37:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 09:37:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 09:37:26 compute-0 systemd[1]: Reloading.
Oct 08 09:37:26 compute-0 systemd-rc-local-generator[42425]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:37:26 compute-0 systemd-sysv-generator[42428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:37:26 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 09:37:27 compute-0 sudo[41897]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 09:37:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 09:37:27 compute-0 systemd[1]: run-r1d590a782f0d4d958d645f584de39c78.service: Deactivated successfully.
Oct 08 09:37:29 compute-0 sudo[42999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhoyphowpeezvzzchsxppkvxrliphimy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916248.6776311-212-140911862788951/AnsiballZ_systemd.py'
Oct 08 09:37:29 compute-0 sudo[42999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:29 compute-0 python3.9[43001]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 09:37:29 compute-0 systemd[1]: Reloading.
Oct 08 09:37:29 compute-0 systemd-sysv-generator[43035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:37:29 compute-0 systemd-rc-local-generator[43032]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:37:29 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 08 09:37:29 compute-0 chown[43043]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 08 09:37:29 compute-0 ovs-ctl[43048]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 08 09:37:29 compute-0 ovs-ctl[43048]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 08 09:37:29 compute-0 ovs-ctl[43048]: Starting ovsdb-server [  OK  ]
Oct 08 09:37:29 compute-0 ovs-vsctl[43097]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 08 09:37:30 compute-0 ovs-vsctl[43116]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"26869918-b723-425c-a2e1-0d697f3d0fec\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 08 09:37:30 compute-0 ovs-ctl[43048]: Configuring Open vSwitch system IDs [  OK  ]
Oct 08 09:37:30 compute-0 ovs-vsctl[43122]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 08 09:37:30 compute-0 ovs-ctl[43048]: Enabling remote OVSDB managers [  OK  ]
Oct 08 09:37:30 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 08 09:37:30 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 08 09:37:30 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 08 09:37:30 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 08 09:37:30 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 08 09:37:30 compute-0 ovs-ctl[43166]: Inserting openvswitch module [  OK  ]
Oct 08 09:37:30 compute-0 ovs-ctl[43135]: Starting ovs-vswitchd [  OK  ]
Oct 08 09:37:30 compute-0 ovs-vsctl[43184]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 08 09:37:30 compute-0 ovs-ctl[43135]: Enabling remote OVSDB managers [  OK  ]
Oct 08 09:37:30 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 08 09:37:30 compute-0 systemd[1]: Starting Open vSwitch...
Oct 08 09:37:30 compute-0 systemd[1]: Finished Open vSwitch.
Oct 08 09:37:30 compute-0 sudo[42999]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:31 compute-0 python3.9[43335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:37:32 compute-0 sudo[43485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsswizeewpjghpyduijbevedsscixxdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916252.1633909-266-85297429693165/AnsiballZ_sefcontext.py'
Oct 08 09:37:32 compute-0 sudo[43485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:32 compute-0 python3.9[43487]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 08 09:37:33 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Oct 08 09:37:33 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:37:33 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 09:37:33 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:37:33 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:37:33 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:37:33 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:37:33 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:37:34 compute-0 sudo[43485]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:35 compute-0 python3.9[43643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:37:36 compute-0 sudo[43799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwwrznzetlxetxearpzmudjygjzbrsku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916255.8852184-320-129892615458642/AnsiballZ_dnf.py'
Oct 08 09:37:36 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 08 09:37:36 compute-0 sudo[43799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:36 compute-0 python3.9[43801]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:37:37 compute-0 sudo[43799]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:38 compute-0 sudo[43952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdzimbbfahdhsqcpkvbfrklhqlfysgtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916258.0280316-344-161444701541909/AnsiballZ_command.py'
Oct 08 09:37:38 compute-0 sudo[43952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:38 compute-0 python3.9[43954]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:37:39 compute-0 sudo[43952]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:40 compute-0 sudo[44239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdmgqnzghklwqcamtqvyrxfgmxlpqgsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916259.6748118-368-240686439405734/AnsiballZ_file.py'
Oct 08 09:37:40 compute-0 sudo[44239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:40 compute-0 python3.9[44241]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 08 09:37:40 compute-0 sudo[44239]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:41 compute-0 python3.9[44391]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:37:41 compute-0 sudo[44543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juppjkjkmyzmercwuyojdfownhkulaxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916261.497325-416-188325971208899/AnsiballZ_dnf.py'
Oct 08 09:37:41 compute-0 sudo[44543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:41 compute-0 python3.9[44545]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:37:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 09:37:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 09:37:43 compute-0 systemd[1]: Reloading.
Oct 08 09:37:43 compute-0 systemd-rc-local-generator[44586]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:37:43 compute-0 systemd-sysv-generator[44589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:37:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 09:37:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 09:37:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 09:37:44 compute-0 systemd[1]: run-rb05b86f4cf4040ecba4eae06f91c9fc0.service: Deactivated successfully.
Oct 08 09:37:44 compute-0 sudo[44543]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:45 compute-0 sudo[44860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xttjxnfrslcyhvxlxsmqbwbjmhiyyhix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916264.8740222-440-277181187920209/AnsiballZ_systemd.py'
Oct 08 09:37:45 compute-0 sudo[44860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:45 compute-0 python3.9[44862]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:37:45 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 08 09:37:45 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Oct 08 09:37:45 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Oct 08 09:37:45 compute-0 systemd[1]: Stopping Network Manager...
Oct 08 09:37:45 compute-0 NetworkManager[3964]: <info>  [1759916265.4880] caught SIGTERM, shutting down normally.
Oct 08 09:37:45 compute-0 NetworkManager[3964]: <info>  [1759916265.4892] dhcp4 (eth0): canceled DHCP transaction
Oct 08 09:37:45 compute-0 NetworkManager[3964]: <info>  [1759916265.4892] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 08 09:37:45 compute-0 NetworkManager[3964]: <info>  [1759916265.4892] dhcp4 (eth0): state changed no lease
Oct 08 09:37:45 compute-0 NetworkManager[3964]: <info>  [1759916265.4894] manager: NetworkManager state is now CONNECTED_SITE
Oct 08 09:37:45 compute-0 NetworkManager[3964]: <info>  [1759916265.4952] exiting (success)
Oct 08 09:37:45 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 08 09:37:45 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 08 09:37:45 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 08 09:37:45 compute-0 systemd[1]: Stopped Network Manager.
Oct 08 09:37:45 compute-0 systemd[1]: NetworkManager.service: Consumed 8.594s CPU time, 4.3M memory peak, read 0B from disk, written 15.0K to disk.
Oct 08 09:37:45 compute-0 systemd[1]: Starting Network Manager...
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.5412] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:82191aaa-5b9a-46b2-ace7-0656efb209fc)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.5414] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.5469] manager[0x5577884d6090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 08 09:37:45 compute-0 systemd[1]: Starting Hostname Service...
Oct 08 09:37:45 compute-0 systemd[1]: Started Hostname Service.
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6252] hostname: hostname: using hostnamed
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6253] hostname: static hostname changed from (none) to "compute-0"
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6257] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6262] manager[0x5577884d6090]: rfkill: Wi-Fi hardware radio set enabled
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6263] manager[0x5577884d6090]: rfkill: WWAN hardware radio set enabled
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6285] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6294] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6295] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6295] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6296] manager: Networking is enabled by state file
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6298] settings: Loaded settings plugin: keyfile (internal)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6301] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6321] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6329] dhcp: init: Using DHCP client 'internal'
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6331] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6335] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6339] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6346] device (lo): Activation: starting connection 'lo' (04954bd0-4d1f-4562-9334-15a987bf371b)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6351] device (eth0): carrier: link connected
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6354] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6358] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6358] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6364] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6368] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6373] device (eth1): carrier: link connected
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6376] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6380] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5) (indicated)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6381] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6385] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6390] device (eth1): Activation: starting connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5)
Oct 08 09:37:45 compute-0 systemd[1]: Started Network Manager.
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6401] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6943] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6946] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6948] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6950] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6953] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6955] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6958] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6964] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6970] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6972] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6982] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.6994] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7005] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7006] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7011] device (lo): Activation: successful, device activated.
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7017] dhcp4 (eth0): state changed new lease, address=38.102.83.224
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7021] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 08 09:37:45 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7076] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7083] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7087] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7090] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7095] device (eth1): Activation: successful, device activated.
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7105] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7107] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7110] manager: NetworkManager state is now CONNECTED_SITE
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7115] device (eth0): Activation: successful, device activated.
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7119] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 08 09:37:45 compute-0 NetworkManager[44872]: <info>  [1759916265.7124] manager: startup complete
Oct 08 09:37:45 compute-0 sudo[44860]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:45 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 08 09:37:46 compute-0 sudo[45087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utyenmkuivixvfjncsclvifditaogcjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916265.9841766-464-8442070885950/AnsiballZ_dnf.py'
Oct 08 09:37:46 compute-0 sudo[45087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:46 compute-0 python3.9[45089]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:37:51 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 09:37:51 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 09:37:51 compute-0 systemd[1]: Reloading.
Oct 08 09:37:51 compute-0 systemd-sysv-generator[45145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:37:51 compute-0 systemd-rc-local-generator[45142]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:37:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 09:37:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 09:37:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 09:37:52 compute-0 systemd[1]: run-rb2ae27d1dfe74f7a9ee49228583760ae.service: Deactivated successfully.
Oct 08 09:37:52 compute-0 sudo[45087]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:55 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 08 09:37:56 compute-0 sudo[45550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flpbzlgktgdbkoucnzhygxwvupcclhhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916276.1544216-500-98903035235032/AnsiballZ_stat.py'
Oct 08 09:37:56 compute-0 sudo[45550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:56 compute-0 python3.9[45552]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:37:56 compute-0 sudo[45550]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:57 compute-0 sudo[45702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gumlywdzhpvjzxarqvxmwwcbihgrwbbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916277.0497906-527-70306130168789/AnsiballZ_ini_file.py'
Oct 08 09:37:57 compute-0 sudo[45702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:57 compute-0 python3.9[45704]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:37:58 compute-0 sudo[45702]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:58 compute-0 sudo[45856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhthrgehgwxzvxdpxwtqdmqqqwaaecdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916278.4605098-557-74965623312934/AnsiballZ_ini_file.py'
Oct 08 09:37:58 compute-0 sudo[45856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:58 compute-0 python3.9[45858]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:37:58 compute-0 sudo[45856]: pam_unix(sudo:session): session closed for user root
Oct 08 09:37:59 compute-0 sudo[46008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcxgxfacfamsdjpfyrflyjsdnwnfltny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916279.068906-557-214251493397515/AnsiballZ_ini_file.py'
Oct 08 09:37:59 compute-0 sudo[46008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:37:59 compute-0 python3.9[46010]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:37:59 compute-0 sudo[46008]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:00 compute-0 sudo[46160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrpaixexepautlxhrepbczuiemejxhuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916279.8757946-602-120643753973787/AnsiballZ_ini_file.py'
Oct 08 09:38:00 compute-0 sudo[46160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:00 compute-0 python3.9[46162]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:38:00 compute-0 sudo[46160]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:00 compute-0 sudo[46312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brhramtcblejtsvqjeuwktwatsrpaxje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916280.4452574-602-224052554314468/AnsiballZ_ini_file.py'
Oct 08 09:38:00 compute-0 sudo[46312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:00 compute-0 python3.9[46314]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:38:00 compute-0 sudo[46312]: pam_unix(sudo:session): session closed for user root
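The ini_file tasks above prepare NetworkManager for the os-net-config run that follows: no-auto-default=* is set in the [main] section of /etc/NetworkManager/NetworkManager.conf (so NM stops auto-creating "Wired connection" profiles for new interfaces), and the dns=none and rc-manager=unmanaged overrides, which keep NetworkManager away from resolv.conf, are removed from both NetworkManager.conf and the cloud-init drop-in /etc/NetworkManager/conf.d/99-cloud-init.conf. A minimal standard-library sketch of the same kind of edit is shown below; the paths, section and option names come from the logged module parameters, while the helper itself is illustrative (it drops the options unconditionally, whereas ini_file with state=absent only removes entries matching the supplied value).

    import configparser

    def edit_main_section(path, set_opts=(), drop_opts=()):
        """Roughly what each ini_file task above does to one file's [main] section."""
        cfg = configparser.ConfigParser()
        cfg.optionxform = str                     # preserve option-name case
        cfg.read(path)                            # a missing file just starts empty (create=True)
        if not cfg.has_section("main"):
            cfg.add_section("main")
        for key, value in set_opts:
            cfg.set("main", key, value)
        for key in drop_opts:
            cfg.remove_option("main", key)
        with open(path, "w") as fh:
            cfg.write(fh, space_around_delimiters=False)   # mirrors no_extra_spaces=True

    # Values taken from the logged module parameters.
    edit_main_section("/etc/NetworkManager/NetworkManager.conf",
                      set_opts=[("no-auto-default", "*")],
                      drop_opts=["dns", "rc-manager"])
    edit_main_section("/etc/NetworkManager/conf.d/99-cloud-init.conf",
                      drop_opts=["dns", "rc-manager"])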
Oct 08 09:38:01 compute-0 sudo[46464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgnndwqaabotktznwhamkvdbrlcjnvqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916281.5762358-647-140299354159452/AnsiballZ_stat.py'
Oct 08 09:38:01 compute-0 sudo[46464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:02 compute-0 python3.9[46466]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:38:02 compute-0 sudo[46464]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:02 compute-0 sudo[46588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujcpbdpahesvujxuhsfqljlnyezdskrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916281.5762358-647-140299354159452/AnsiballZ_copy.py'
Oct 08 09:38:02 compute-0 sudo[46588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:02 compute-0 python3.9[46590]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916281.5762358-647-140299354159452/.source _original_basename=.6atzymsv follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:38:02 compute-0 sudo[46588]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:03 compute-0 sudo[46740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlcswxmjatwmgpmgcqrocilfzehmhmyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916283.0293057-692-113356952080423/AnsiballZ_file.py'
Oct 08 09:38:03 compute-0 sudo[46740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:03 compute-0 python3.9[46742]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:38:03 compute-0 sudo[46740]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:04 compute-0 sudo[46892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpjpcnhlklmzfekofwpejeteionvqkpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916283.753278-716-260970514236142/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 08 09:38:04 compute-0 sudo[46892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:04 compute-0 python3.9[46894]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 08 09:38:04 compute-0 sudo[46892]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:04 compute-0 sudo[47044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uasvsscuuptszwzbvtwxpaaftvxcojfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916284.6858945-743-74457707529567/AnsiballZ_file.py'
Oct 08 09:38:04 compute-0 sudo[47044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:05 compute-0 python3.9[47046]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:38:05 compute-0 sudo[47044]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:06 compute-0 sudo[47196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiaqgdlgtnzhqyxxkbrhbgakkconklop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916285.838575-773-147142115589624/AnsiballZ_stat.py'
Oct 08 09:38:06 compute-0 sudo[47196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:06 compute-0 sudo[47196]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:06 compute-0 sudo[47319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaifnclfoeokfehjhsjcrpziwbtkmmqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916285.838575-773-147142115589624/AnsiballZ_copy.py'
Oct 08 09:38:06 compute-0 sudo[47319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:07 compute-0 sudo[47319]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:07 compute-0 sudo[47471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lenonyauctcmorehomnuohdaiixkgyeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916287.267796-818-3402372018096/AnsiballZ_slurp.py'
Oct 08 09:38:07 compute-0 sudo[47471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:07 compute-0 python3.9[47473]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 08 09:38:07 compute-0 sudo[47471]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:08 compute-0 sudo[47646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scjhtczijzfbhvkswpccogdtlompfnyu ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916288.202535-845-261026550580536/async_wrapper.py j124972591745 300 /home/zuul/.ansible/tmp/ansible-tmp-1759916288.202535-845-261026550580536/AnsiballZ_edpm_os_net_config.py _'
Oct 08 09:38:08 compute-0 sudo[47646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:09 compute-0 ansible-async_wrapper.py[47648]: Invoked with j124972591745 300 /home/zuul/.ansible/tmp/ansible-tmp-1759916288.202535-845-261026550580536/AnsiballZ_edpm_os_net_config.py _
Oct 08 09:38:09 compute-0 ansible-async_wrapper.py[47651]: Starting module and watcher
Oct 08 09:38:09 compute-0 ansible-async_wrapper.py[47651]: Start watching 47652 (300)
Oct 08 09:38:09 compute-0 ansible-async_wrapper.py[47652]: Start module (47652)
Oct 08 09:38:09 compute-0 ansible-async_wrapper.py[47648]: Return async_wrapper task started.
Oct 08 09:38:09 compute-0 sudo[47646]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:09 compute-0 python3.9[47653]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
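The ansible-edpm_os_net_config invocation above is what drives everything NetworkManager logs next: it runs os-net-config against /etc/os-net-config/config.yaml with debug, cleanup and detailed exit codes enabled, and use_nmstate=True selects the nmstate/NetworkManager backend, which is why the changes show up as NM checkpoints and connection-add audit entries rather than ifcfg files. As a hedged sketch (the exact flag set the module passes is an assumption; the config path and option intent come from the log), the underlying call amounts to something like:

    import subprocess

    # Illustrative equivalent of the logged module invocation.
    result = subprocess.run([
        "os-net-config",
        "--config-file", "/etc/os-net-config/config.yaml",   # config_file=...
        "--debug",                                            # debug=True
        "--detailed-exit-codes",                              # detailed_exit_codes=True
        "--cleanup",                                          # cleanup=True
    ])

    # With detailed exit codes, 0 conventionally means "nothing changed" and 2
    # "configuration applied"; the role persists the value so later plays can
    # check /var/lib/edpm-config/os-net-config.returncode (stat'ed further down).
    with open("/var/lib/edpm-config/os-net-config.returncode", "w") as fh:
        fh.write(str(result.returncode))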
Oct 08 09:38:09 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 08 09:38:09 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 08 09:38:09 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 08 09:38:09 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 08 09:38:09 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8136] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8157] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8642] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8644] audit: op="connection-add" uuid="d1ef9515-d92f-45d1-94ba-eab87c3ebbc3" name="br-ex-br" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8660] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8662] audit: op="connection-add" uuid="ee7778aa-9726-4f40-b3e1-89de1d61b1e9" name="br-ex-port" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8673] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8675] audit: op="connection-add" uuid="05a658b1-434f-4d26-b5c3-25062d421ffd" name="eth1-port" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8686] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8687] audit: op="connection-add" uuid="0aec10a1-4bab-4b88-b026-e73e6cbe621b" name="vlan20-port" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8698] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8700] audit: op="connection-add" uuid="ec7377c4-9b96-44a5-b55f-39624ce8ce0f" name="vlan21-port" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8711] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8713] audit: op="connection-add" uuid="adfe5585-e7bb-479a-a1a4-3f6af82efe8d" name="vlan22-port" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8723] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8725] audit: op="connection-add" uuid="481b305d-7d8a-4521-b8ec-5eeaa72834b0" name="vlan23-port" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8743] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8759] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8761] audit: op="connection-add" uuid="2303ad94-5cc0-4641-9983-0a2eee400b01" name="br-ex-if" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8790] audit: op="connection-update" uuid="f3e90ac0-ed6a-5434-b062-a53261128ad5" name="ci-private-network" args="ovs-interface.type,ipv4.method,ipv4.dns,ipv4.routing-rules,ipv4.routes,ipv4.addresses,ipv4.never-default,connection.slave-type,connection.timestamp,connection.controller,connection.master,connection.port-type,ipv6.method,ipv6.dns,ipv6.routing-rules,ipv6.routes,ipv6.addr-gen-mode,ipv6.addresses,ovs-external-ids.data" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8805] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8807] audit: op="connection-add" uuid="855c26f1-c03b-4b2e-827d-6aebda727c18" name="vlan20-if" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8821] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8823] audit: op="connection-add" uuid="832b4d99-c665-4b2d-8400-188b1077c45a" name="vlan21-if" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8837] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8839] audit: op="connection-add" uuid="18043826-267e-49c3-9d2c-5885a3457256" name="vlan22-if" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8854] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8856] audit: op="connection-add" uuid="dbd7dc91-45d8-4d7a-9896-ebb9c31fadaa" name="vlan23-if" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8866] audit: op="connection-delete" uuid="aa7d912d-605e-338f-afad-61058792d4cf" name="Wired connection 1" pid=47654 uid=0 result="success"
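Taken together, the connection-add audit entries above describe the OVS topology os-net-config asks NetworkManager to build: an ovs-bridge br-ex with its own ovs-port/ovs-interface pair (br-ex-port/br-ex-if), an ovs-port that enslaves eth1, and one ovs-port/ovs-interface pair per VLAN (vlan20-vlan23); the existing 'System eth0' and 'ci-private-network' profiles are updated in place and the now-redundant 'Wired connection 1' is deleted. A rough hand-rolled equivalent of the bridge triple plus one VLAN pair, driving nmcli from Python (connection names are taken from the audit entries; the addressing and the VLAN tag value are assumptions), would be:

    import subprocess

    def nm_add(*args):
        """Thin wrapper around `nmcli connection add`; raises if NM rejects the profile."""
        subprocess.run(["nmcli", "connection", "add", *args], check=True)

    # Bridge / port / interface triple, named as in the audit entries above.
    nm_add("type", "ovs-bridge", "conn.interface", "br-ex", "con-name", "br-ex-br")
    nm_add("type", "ovs-port", "conn.interface", "br-ex", "master", "br-ex-br",
           "con-name", "br-ex-port")
    nm_add("type", "ovs-interface", "slave-type", "ovs-port",
           "conn.interface", "br-ex", "master", "br-ex-port", "con-name", "br-ex-if",
           "ipv4.method", "disabled", "ipv6.method", "disabled")   # addressing omitted here

    # One tagged port plus internal interface per VLAN; vlan20 shown, vlan21-23 analogous.
    nm_add("type", "ovs-port", "conn.interface", "vlan20", "master", "br-ex-br",
           "ovs-port.tag", "20", "con-name", "vlan20-port")
    nm_add("type", "ovs-interface", "slave-type", "ovs-port",
           "conn.interface", "vlan20", "master", "vlan20-port", "con-name", "vlan20-if")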
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8877] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8887] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8892] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (d1ef9515-d92f-45d1-94ba-eab87c3ebbc3)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8893] audit: op="connection-activate" uuid="d1ef9515-d92f-45d1-94ba-eab87c3ebbc3" name="br-ex-br" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8895] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8903] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8907] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ee7778aa-9726-4f40-b3e1-89de1d61b1e9)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8909] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8915] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8919] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (05a658b1-434f-4d26-b5c3-25062d421ffd)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8921] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8927] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8931] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (0aec10a1-4bab-4b88-b026-e73e6cbe621b)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8933] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8939] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8944] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (ec7377c4-9b96-44a5-b55f-39624ce8ce0f)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8946] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8954] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8958] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (adfe5585-e7bb-479a-a1a4-3f6af82efe8d)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8960] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8966] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8971] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (481b305d-7d8a-4521-b8ec-5eeaa72834b0)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8972] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8974] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8976] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8982] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8986] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8991] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (2303ad94-5cc0-4641-9983-0a2eee400b01)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8992] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8996] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8998] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.8999] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9001] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9011] device (eth1): disconnecting for new activation request.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9012] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9023] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9025] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9026] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9028] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9037] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9039] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (855c26f1-c03b-4b2e-827d-6aebda727c18)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9040] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9043] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9045] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9046] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9048] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9052] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9055] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (832b4d99-c665-4b2d-8400-188b1077c45a)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9056] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9059] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9061] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9062] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9065] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9069] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9073] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (18043826-267e-49c3-9d2c-5885a3457256)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9074] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9077] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9078] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9079] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9082] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9087] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9091] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (dbd7dc91-45d8-4d7a-9896-ebb9c31fadaa)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9092] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9094] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9096] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9097] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9099] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9109] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9111] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9115] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9116] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9122] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9126] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9129] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9132] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9134] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9139] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9143] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9146] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9148] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 systemd-udevd[47658]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 09:38:10 compute-0 kernel: Timeout policy base is empty
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9177] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9181] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9185] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9186] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9192] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9195] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9198] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9200] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9206] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9212] dhcp4 (eth0): canceled DHCP transaction
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9213] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9213] dhcp4 (eth0): state changed no lease
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9216] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9234] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9238] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47654 uid=0 result="fail" reason="Device is not activated"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9245] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9279] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9284] dhcp4 (eth0): state changed new lease, address=38.102.83.224
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9288] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9320] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9327] device (eth1): disconnecting for new activation request.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9328] audit: op="connection-activate" uuid="f3e90ac0-ed6a-5434-b062-a53261128ad5" name="ci-private-network" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9343] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9466] device (eth1): Activation: starting connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5)
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9482] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9486] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9491] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9493] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9494] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9496] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9497] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9499] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9501] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9502] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9506] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9513] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9518] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9524] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9530] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9534] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9540] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9544] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9549] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 09:38:10 compute-0 kernel: br-ex: entered promiscuous mode
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9554] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9559] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9564] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9570] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9574] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9578] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 09:38:10 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9588] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9592] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9649] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9650] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9656] device (eth1): Activation: successful, device activated.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9669] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 08 09:38:10 compute-0 kernel: vlan22: entered promiscuous mode
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9692] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 systemd-udevd[47660]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9725] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9727] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9731] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 08 09:38:10 compute-0 kernel: vlan23: entered promiscuous mode
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9810] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 08 09:38:10 compute-0 kernel: vlan20: entered promiscuous mode
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9832] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 systemd-udevd[47767]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9849] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9854] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9866] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 08 09:38:10 compute-0 kernel: vlan21: entered promiscuous mode
Oct 08 09:38:10 compute-0 systemd-udevd[47659]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9895] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9919] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9996] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 08 09:38:10 compute-0 NetworkManager[44872]: <info>  [1759916290.9997] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0004] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0008] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0013] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0044] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0051] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0090] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0092] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0094] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0100] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0105] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 09:38:11 compute-0 NetworkManager[44872]: <info>  [1759916291.0113] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.1195] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.2780] checkpoint[0x5577884ab950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.2782] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.5729] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47654 uid=0 result="success"
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.5738] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47654 uid=0 result="success"
Oct 08 09:38:12 compute-0 sudo[48012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifmgfxthcfdwhjrjllxmnhzewqdssify ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916292.334193-845-252361219724899/AnsiballZ_async_status.py'
Oct 08 09:38:12 compute-0 sudo[48012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.7873] audit: op="networking-control" arg="global-dns-configuration" pid=47654 uid=0 result="success"
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.7901] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.7934] audit: op="networking-control" arg="global-dns-configuration" pid=47654 uid=0 result="success"
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.7965] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47654 uid=0 result="success"
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.9147] checkpoint[0x5577884aba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 08 09:38:12 compute-0 NetworkManager[44872]: <info>  [1759916292.9152] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47654 uid=0 result="success"
Oct 08 09:38:12 compute-0 python3.9[48014]: ansible-ansible.legacy.async_status Invoked with jid=j124972591745.47648 mode=status _async_dir=/root/.ansible_async
Oct 08 09:38:12 compute-0 sudo[48012]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:12 compute-0 ansible-async_wrapper.py[47652]: Module complete (47652)
Oct 08 09:38:14 compute-0 ansible-async_wrapper.py[47651]: Done in kid B.
Oct 08 09:38:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 08 09:38:16 compute-0 sudo[48119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tykbexxvuyksscrbazbwqybygdrtgtlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916292.334193-845-252361219724899/AnsiballZ_async_status.py'
Oct 08 09:38:16 compute-0 sudo[48119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:16 compute-0 python3.9[48121]: ansible-ansible.legacy.async_status Invoked with jid=j124972591745.47648 mode=status _async_dir=/root/.ansible_async
Oct 08 09:38:16 compute-0 sudo[48119]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:16 compute-0 sudo[48218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mweluynrvdhfsnhxtsnccohxgvsjrrvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916292.334193-845-252361219724899/AnsiballZ_async_status.py'
Oct 08 09:38:16 compute-0 sudo[48218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:17 compute-0 python3.9[48220]: ansible-ansible.legacy.async_status Invoked with jid=j124972591745.47648 mode=cleanup _async_dir=/root/.ansible_async
Oct 08 09:38:17 compute-0 sudo[48218]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:17 compute-0 sudo[48370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhqtgbullrqwvksmqxlqrcpirhgadie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916297.5545144-926-136862017873719/AnsiballZ_stat.py'
Oct 08 09:38:17 compute-0 sudo[48370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:18 compute-0 python3.9[48372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:38:18 compute-0 sudo[48370]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:18 compute-0 sudo[48493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axwgtlozxtggbrtihhjzrrvsgdcfuamc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916297.5545144-926-136862017873719/AnsiballZ_copy.py'
Oct 08 09:38:18 compute-0 sudo[48493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:18 compute-0 python3.9[48495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916297.5545144-926-136862017873719/.source.returncode _original_basename=.2k2s838c follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:38:18 compute-0 sudo[48493]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:19 compute-0 sudo[48645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxhebltvalduxlwttgxttsplmgqtipvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916299.0006137-974-21997374334111/AnsiballZ_stat.py'
Oct 08 09:38:19 compute-0 sudo[48645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:19 compute-0 python3.9[48647]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:38:19 compute-0 sudo[48645]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:19 compute-0 sudo[48769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmbypofrygldmuneqqbtsisbktkvnfcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916299.0006137-974-21997374334111/AnsiballZ_copy.py'
Oct 08 09:38:19 compute-0 sudo[48769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:19 compute-0 python3.9[48771]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916299.0006137-974-21997374334111/.source.cfg _original_basename=.zjz3s1nd follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:38:19 compute-0 sudo[48769]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:20 compute-0 sudo[48921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oftgdnmmtrwdrzfpkoejrcfxthavgnap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916300.2957218-1019-9444717889808/AnsiballZ_systemd.py'
Oct 08 09:38:20 compute-0 sudo[48921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:20 compute-0 python3.9[48923]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:38:20 compute-0 systemd[1]: Reloading Network Manager...
Oct 08 09:38:20 compute-0 NetworkManager[44872]: <info>  [1759916300.9157] audit: op="reload" arg="0" pid=48927 uid=0 result="success"
Oct 08 09:38:20 compute-0 NetworkManager[44872]: <info>  [1759916300.9163] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 08 09:38:20 compute-0 systemd[1]: Reloaded Network Manager.
Oct 08 09:38:20 compute-0 sudo[48921]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:21 compute-0 sshd-session[40874]: Connection closed by 192.168.122.30 port 48946
Oct 08 09:38:21 compute-0 sshd-session[40871]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:38:21 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 08 09:38:21 compute-0 systemd[1]: session-10.scope: Consumed 47.223s CPU time.
Oct 08 09:38:21 compute-0 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Oct 08 09:38:21 compute-0 systemd-logind[798]: Removed session 10.
Oct 08 09:38:27 compute-0 sshd-session[48958]: Accepted publickey for zuul from 192.168.122.30 port 37650 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:38:27 compute-0 systemd-logind[798]: New session 11 of user zuul.
Oct 08 09:38:27 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 08 09:38:27 compute-0 sshd-session[48958]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:38:28 compute-0 python3.9[49111]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:38:29 compute-0 python3.9[49266]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:38:30 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 08 09:38:31 compute-0 python3.9[49459]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:38:31 compute-0 sshd-session[48961]: Connection closed by 192.168.122.30 port 37650
Oct 08 09:38:31 compute-0 sshd-session[48958]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:38:31 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 08 09:38:31 compute-0 systemd[1]: session-11.scope: Consumed 2.304s CPU time.
Oct 08 09:38:31 compute-0 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Oct 08 09:38:31 compute-0 systemd-logind[798]: Removed session 11.
Oct 08 09:38:36 compute-0 sshd-session[49488]: Accepted publickey for zuul from 192.168.122.30 port 32830 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:38:36 compute-0 systemd-logind[798]: New session 12 of user zuul.
Oct 08 09:38:36 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 08 09:38:36 compute-0 sshd-session[49488]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:38:37 compute-0 python3.9[49641]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:38:38 compute-0 python3.9[49795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:38:39 compute-0 sudo[49950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxujcrwhvhanvjaqpyxhqyohhukstssc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916319.0926993-80-160085138416705/AnsiballZ_setup.py'
Oct 08 09:38:39 compute-0 sudo[49950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:39 compute-0 python3.9[49952]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:38:39 compute-0 sudo[49950]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:40 compute-0 sudo[50034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njhvncpensyjtpeonebyfjihqtppvoln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916319.0926993-80-160085138416705/AnsiballZ_dnf.py'
Oct 08 09:38:40 compute-0 sudo[50034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:40 compute-0 python3.9[50036]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:38:41 compute-0 sudo[50034]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:42 compute-0 sudo[50188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqmmtiqvvpuchosekcfyrdmlbsytjmfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916321.9919024-116-223111472044382/AnsiballZ_setup.py'
Oct 08 09:38:42 compute-0 sudo[50188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:42 compute-0 python3.9[50190]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:38:42 compute-0 sudo[50188]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:43 compute-0 sudo[50383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eazovqcigqfldxhvtzpjlxpmixjdkmrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916323.297733-149-14931109469877/AnsiballZ_file.py'
Oct 08 09:38:43 compute-0 sudo[50383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:43 compute-0 python3.9[50385]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:38:43 compute-0 sudo[50383]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:44 compute-0 sudo[50535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgvnjtuzbnvlavkegxbuybqnonaxyax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916324.2140884-173-238406389935318/AnsiballZ_command.py'
Oct 08 09:38:44 compute-0 sudo[50535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:44 compute-0 python3.9[50537]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1012587662-merged.mount: Deactivated successfully.
Oct 08 09:38:44 compute-0 podman[50538]: 2025-10-08 09:38:44.890967057 +0000 UTC m=+0.045581011 system refresh
Oct 08 09:38:44 compute-0 sudo[50535]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:45 compute-0 sudo[50698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksyquaodqszcwmixycmpxrbrsadvidri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916325.1675496-197-112371516454510/AnsiballZ_stat.py'
Oct 08 09:38:45 compute-0 sudo[50698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:45 compute-0 python3.9[50700]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:38:45 compute-0 sudo[50698]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:38:46 compute-0 sudo[50821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hksxriicnkkqtybwflzkysihfysbbago ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916325.1675496-197-112371516454510/AnsiballZ_copy.py'
Oct 08 09:38:46 compute-0 sudo[50821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:46 compute-0 python3.9[50823]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916325.1675496-197-112371516454510/.source.json follow=False _original_basename=podman_network_config.j2 checksum=51cae438ebb1fc11044e40e0585a1b8c3a148f17 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:38:46 compute-0 sudo[50821]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:47 compute-0 sudo[50973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eimahlbfjtiwrvmrfjfgmfpbllogvspu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916326.82971-242-185420978338250/AnsiballZ_stat.py'
Oct 08 09:38:47 compute-0 sudo[50973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:47 compute-0 python3.9[50975]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:38:47 compute-0 sudo[50973]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:47 compute-0 sudo[51096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzvkffryzimkyexhfdcxmodlbytowzig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916326.82971-242-185420978338250/AnsiballZ_copy.py'
Oct 08 09:38:47 compute-0 sudo[51096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:47 compute-0 python3.9[51098]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916326.82971-242-185420978338250/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:38:47 compute-0 sudo[51096]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:48 compute-0 sudo[51248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keukikhkmvwbirhqjlocfylckhajjvbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916328.1422853-290-157681397648084/AnsiballZ_ini_file.py'
Oct 08 09:38:48 compute-0 sudo[51248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:48 compute-0 python3.9[51250]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:38:48 compute-0 sudo[51248]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:49 compute-0 sudo[51400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wefksbasvclwhbweekmcntwpjbdluysv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916328.9162228-290-253198592754115/AnsiballZ_ini_file.py'
Oct 08 09:38:49 compute-0 sudo[51400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:49 compute-0 python3.9[51402]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:38:49 compute-0 sudo[51400]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:49 compute-0 sudo[51552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chqlhlelrfpcamnpojkvqwpycwwkjqon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916329.468178-290-248932010607063/AnsiballZ_ini_file.py'
Oct 08 09:38:49 compute-0 sudo[51552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:49 compute-0 python3.9[51554]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:38:49 compute-0 sudo[51552]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:50 compute-0 sudo[51704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twyrimcibdvwaqxkrnlefejpfllrvzvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916330.0055242-290-205796116306576/AnsiballZ_ini_file.py'
Oct 08 09:38:50 compute-0 sudo[51704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:50 compute-0 python3.9[51706]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:38:50 compute-0 sudo[51704]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:51 compute-0 sudo[51856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahnwikwupotzdhqfrvqoqfjborxutwwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916330.992232-383-226166322582680/AnsiballZ_dnf.py'
Oct 08 09:38:51 compute-0 sudo[51856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:51 compute-0 python3.9[51858]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:38:52 compute-0 sudo[51856]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:53 compute-0 sudo[52009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uquennfyuogcxpdiwfrjfczpqycmrqnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916333.2727063-416-9250960285865/AnsiballZ_setup.py'
Oct 08 09:38:53 compute-0 sudo[52009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:53 compute-0 python3.9[52011]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:38:53 compute-0 sudo[52009]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:54 compute-0 sudo[52163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uamcbivpcrszimpefoktxmptvialvbxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916334.1367452-440-204478912204883/AnsiballZ_stat.py'
Oct 08 09:38:54 compute-0 sudo[52163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:54 compute-0 python3.9[52165]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:38:54 compute-0 sudo[52163]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:55 compute-0 sudo[52315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqljbxntcyqbtjiwqundtivdahdujllv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916334.939072-467-80202215597404/AnsiballZ_stat.py'
Oct 08 09:38:55 compute-0 sudo[52315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:55 compute-0 python3.9[52317]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:38:55 compute-0 sudo[52315]: pam_unix(sudo:session): session closed for user root
Oct 08 09:38:56 compute-0 sudo[52467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qudfsyzweodumplxowlvgsximmoxvgmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916335.7733634-497-114934475316807/AnsiballZ_service_facts.py'
Oct 08 09:38:56 compute-0 sudo[52467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:38:56 compute-0 python3.9[52469]: ansible-service_facts Invoked
Oct 08 09:38:56 compute-0 network[52486]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 09:38:56 compute-0 network[52487]: 'network-scripts' will be removed from distribution in near future.
Oct 08 09:38:56 compute-0 network[52488]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 09:38:59 compute-0 sudo[52467]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:02 compute-0 sudo[52773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtgijbnoutswaehdfcsjmluxhswwwost ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759916342.1687624-536-178637871637905/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759916342.1687624-536-178637871637905/args'
Oct 08 09:39:02 compute-0 sudo[52773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:02 compute-0 sudo[52773]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:03 compute-0 sudo[52940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnvcnmrlmnkmfsefeuiopfrcwjxldidh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916342.9464612-569-507894232276/AnsiballZ_dnf.py'
Oct 08 09:39:03 compute-0 sudo[52940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:03 compute-0 python3.9[52942]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:39:04 compute-0 sudo[52940]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:06 compute-0 sudo[53093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gruenwiulubzeajbkiraoepkpiutvujt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916345.592568-608-171202970168802/AnsiballZ_package_facts.py'
Oct 08 09:39:06 compute-0 sudo[53093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:06 compute-0 python3.9[53095]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 08 09:39:06 compute-0 sudo[53093]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:07 compute-0 sudo[53245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gptxtsgyuhjgumzzivdbuwtzudtetyqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916347.4793897-638-67779334586111/AnsiballZ_stat.py'
Oct 08 09:39:07 compute-0 sudo[53245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:07 compute-0 python3.9[53247]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:08 compute-0 sudo[53245]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:08 compute-0 sudo[53370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moqbzafdvhfxiyxxgizxagtktaelwtxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916347.4793897-638-67779334586111/AnsiballZ_copy.py'
Oct 08 09:39:08 compute-0 sudo[53370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:08 compute-0 python3.9[53372]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916347.4793897-638-67779334586111/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:08 compute-0 sudo[53370]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:09 compute-0 sudo[53524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcwjqnfvzunmxovfsrwdfmpdzbdqtkrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916348.9954884-683-59412474072056/AnsiballZ_stat.py'
Oct 08 09:39:09 compute-0 sudo[53524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:09 compute-0 python3.9[53526]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:09 compute-0 sudo[53524]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:09 compute-0 sudo[53649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tleavksswjcgdzqbyshhnznrmpbvdznd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916348.9954884-683-59412474072056/AnsiballZ_copy.py'
Oct 08 09:39:09 compute-0 sudo[53649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:10 compute-0 python3.9[53651]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916348.9954884-683-59412474072056/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:10 compute-0 sudo[53649]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:11 compute-0 sudo[53803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyqrxsapjxnabutkyzvrdldecmgyilor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916351.15113-746-129035444806824/AnsiballZ_lineinfile.py'
Oct 08 09:39:11 compute-0 sudo[53803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:11 compute-0 python3.9[53805]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:11 compute-0 sudo[53803]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:13 compute-0 sudo[53957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofxzrrmiopbawcavkjdrzemgqevxotzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916353.0005915-791-83459575224330/AnsiballZ_setup.py'
Oct 08 09:39:13 compute-0 sudo[53957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:13 compute-0 python3.9[53959]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:39:13 compute-0 sudo[53957]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:14 compute-0 sudo[54041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsgwfgiuzbdbeogadtqegpzfpskzfclm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916353.0005915-791-83459575224330/AnsiballZ_systemd.py'
Oct 08 09:39:14 compute-0 sudo[54041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:14 compute-0 python3.9[54043]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:39:14 compute-0 sudo[54041]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:15 compute-0 sudo[54195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fosqupwtawhyanmplytjlzpfbgxfvhsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916355.6602275-839-129929727709672/AnsiballZ_setup.py'
Oct 08 09:39:15 compute-0 sudo[54195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:16 compute-0 python3.9[54197]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:39:16 compute-0 sudo[54195]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:16 compute-0 sudo[54279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljbosoxdnskljiamiiancyratpwsrfky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916355.6602275-839-129929727709672/AnsiballZ_systemd.py'
Oct 08 09:39:16 compute-0 sudo[54279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:17 compute-0 python3.9[54281]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:39:17 compute-0 systemd[1]: Stopping NTP client/server...
Oct 08 09:39:17 compute-0 chronyd[791]: chronyd exiting
Oct 08 09:39:17 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Oct 08 09:39:17 compute-0 systemd[1]: Stopped NTP client/server.
Oct 08 09:39:17 compute-0 systemd[1]: Starting NTP client/server...
Oct 08 09:39:17 compute-0 chronyd[54290]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 08 09:39:17 compute-0 chronyd[54290]: Frequency -32.293 +/- 0.081 ppm read from /var/lib/chrony/drift
Oct 08 09:39:17 compute-0 chronyd[54290]: Loaded seccomp filter (level 2)
Oct 08 09:39:17 compute-0 systemd[1]: Started NTP client/server.
Oct 08 09:39:17 compute-0 sudo[54279]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:18 compute-0 sshd-session[49491]: Connection closed by 192.168.122.30 port 32830
Oct 08 09:39:18 compute-0 sshd-session[49488]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:39:18 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 08 09:39:18 compute-0 systemd[1]: session-12.scope: Consumed 24.121s CPU time.
Oct 08 09:39:18 compute-0 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Oct 08 09:39:18 compute-0 systemd-logind[798]: Removed session 12.
Oct 08 09:39:24 compute-0 sshd-session[54316]: Accepted publickey for zuul from 192.168.122.30 port 52856 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:39:24 compute-0 systemd-logind[798]: New session 13 of user zuul.
Oct 08 09:39:24 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 08 09:39:24 compute-0 sshd-session[54316]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:39:24 compute-0 sudo[54469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjyiylkvdnnqenqvkofoyulsxankloqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916364.38294-26-210936944093404/AnsiballZ_file.py'
Oct 08 09:39:24 compute-0 sudo[54469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:25 compute-0 python3.9[54471]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:25 compute-0 sudo[54469]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:25 compute-0 sudo[54621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osknscsopgcozleolvwaulqzklzgkdyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916365.3358562-62-96549051267412/AnsiballZ_stat.py'
Oct 08 09:39:25 compute-0 sudo[54621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:26 compute-0 python3.9[54623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:26 compute-0 sudo[54621]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:26 compute-0 sudo[54744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwcirfqlrwcmobnyxyifglokceizsilm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916365.3358562-62-96549051267412/AnsiballZ_copy.py'
Oct 08 09:39:26 compute-0 sudo[54744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:26 compute-0 python3.9[54746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916365.3358562-62-96549051267412/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:26 compute-0 sudo[54744]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:27 compute-0 sshd-session[54319]: Connection closed by 192.168.122.30 port 52856
Oct 08 09:39:27 compute-0 sshd-session[54316]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:39:27 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 08 09:39:27 compute-0 systemd[1]: session-13.scope: Consumed 1.687s CPU time.
Oct 08 09:39:27 compute-0 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Oct 08 09:39:27 compute-0 systemd-logind[798]: Removed session 13.
Oct 08 09:39:32 compute-0 sshd-session[54771]: Accepted publickey for zuul from 192.168.122.30 port 52872 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:39:32 compute-0 systemd-logind[798]: New session 14 of user zuul.
Oct 08 09:39:32 compute-0 systemd[1]: Started Session 14 of User zuul.
Oct 08 09:39:32 compute-0 sshd-session[54771]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:39:33 compute-0 python3.9[54924]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:39:34 compute-0 sudo[55078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkathnwjuvugodcszsrtbkchfjferhga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916373.9398317-59-52261171231362/AnsiballZ_file.py'
Oct 08 09:39:34 compute-0 sudo[55078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:34 compute-0 python3.9[55080]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:34 compute-0 sudo[55078]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:35 compute-0 sudo[55253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llfvhwjobppmttivnspvumuwmssjwhwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916374.8710663-83-186075391440293/AnsiballZ_stat.py'
Oct 08 09:39:35 compute-0 sudo[55253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:35 compute-0 python3.9[55255]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:35 compute-0 sudo[55253]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:36 compute-0 sudo[55376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plrkapedjfzeoudjresorhazxoaoewch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916374.8710663-83-186075391440293/AnsiballZ_copy.py'
Oct 08 09:39:36 compute-0 sudo[55376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:36 compute-0 python3.9[55378]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759916374.8710663-83-186075391440293/.source.json _original_basename=.sld731j1 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:36 compute-0 sudo[55376]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:37 compute-0 sudo[55528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkccwajypqdmkhyarjfuynqhvzjwjjyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916376.782785-152-264008644138162/AnsiballZ_stat.py'
Oct 08 09:39:37 compute-0 sudo[55528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:37 compute-0 python3.9[55530]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:37 compute-0 sudo[55528]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:37 compute-0 sudo[55651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khlgvpaflsyoaaxmbfrmfvdcgbrhcyfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916376.782785-152-264008644138162/AnsiballZ_copy.py'
Oct 08 09:39:37 compute-0 sudo[55651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:37 compute-0 python3.9[55653]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916376.782785-152-264008644138162/.source _original_basename=.rzdvx5mn follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:37 compute-0 sudo[55651]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:38 compute-0 sudo[55803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wggdhwtsiflenqvzzltbejrrtcuppckc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916378.1421497-200-63620884378668/AnsiballZ_file.py'
Oct 08 09:39:38 compute-0 sudo[55803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:38 compute-0 python3.9[55805]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:39:38 compute-0 sudo[55803]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:39 compute-0 sudo[55955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-offakmtdgkgpdxqjmimagqnssiploxgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916378.905387-224-261350587608458/AnsiballZ_stat.py'
Oct 08 09:39:39 compute-0 sudo[55955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:39 compute-0 python3.9[55957]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:39 compute-0 sudo[55955]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:39 compute-0 sudo[56078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzltyedabsqijgaqvwsjzvlfdwvrtaow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916378.905387-224-261350587608458/AnsiballZ_copy.py'
Oct 08 09:39:39 compute-0 sudo[56078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:39 compute-0 python3.9[56080]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916378.905387-224-261350587608458/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:39:39 compute-0 sudo[56078]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:40 compute-0 sudo[56230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxnpyuqirbjgzevveuswlpqzsqxgccxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916379.988106-224-194189057684414/AnsiballZ_stat.py'
Oct 08 09:39:40 compute-0 sudo[56230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:40 compute-0 python3.9[56232]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:40 compute-0 sudo[56230]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:40 compute-0 sudo[56353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmekwqfnbxgqxhydkeeuvhifyjmfmxbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916379.988106-224-194189057684414/AnsiballZ_copy.py'
Oct 08 09:39:40 compute-0 sudo[56353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:41 compute-0 python3.9[56355]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916379.988106-224-194189057684414/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:39:41 compute-0 sudo[56353]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:41 compute-0 sudo[56505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmheuhxlnluxlcekdghemrsmpjclruoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916381.3787463-311-59796903774567/AnsiballZ_file.py'
Oct 08 09:39:41 compute-0 sudo[56505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:41 compute-0 python3.9[56507]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:41 compute-0 sudo[56505]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:42 compute-0 sudo[56657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rekanuxcoamxipllidxsqqekfjfaolsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916382.0254693-335-64124465464662/AnsiballZ_stat.py'
Oct 08 09:39:42 compute-0 sudo[56657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:42 compute-0 python3.9[56659]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:42 compute-0 sudo[56657]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:42 compute-0 sudo[56780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxshzgcwkdjcdijqnopdtyzgotudkex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916382.0254693-335-64124465464662/AnsiballZ_copy.py'
Oct 08 09:39:42 compute-0 sudo[56780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:42 compute-0 python3.9[56782]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916382.0254693-335-64124465464662/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:42 compute-0 sudo[56780]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:43 compute-0 sudo[56932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlvxnmyrlgrroxsgdqzegbkhbrowrycv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916383.30562-380-239234759531957/AnsiballZ_stat.py'
Oct 08 09:39:43 compute-0 sudo[56932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:43 compute-0 python3.9[56934]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:43 compute-0 sudo[56932]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:44 compute-0 sudo[57055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vocxwrndysmwnbcplogkdbtfguwnkgct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916383.30562-380-239234759531957/AnsiballZ_copy.py'
Oct 08 09:39:44 compute-0 sudo[57055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:44 compute-0 python3.9[57057]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916383.30562-380-239234759531957/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:44 compute-0 sudo[57055]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:45 compute-0 sudo[57207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpawjipduvdihjhgropgsvibixefwppb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916384.8162074-425-142054348321601/AnsiballZ_systemd.py'
Oct 08 09:39:45 compute-0 sudo[57207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:45 compute-0 python3.9[57209]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:39:45 compute-0 systemd[1]: Reloading.
Oct 08 09:39:45 compute-0 systemd-rc-local-generator[57234]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:39:45 compute-0 systemd-sysv-generator[57239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:39:45 compute-0 systemd[1]: Reloading.
Oct 08 09:39:46 compute-0 systemd-rc-local-generator[57274]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:39:46 compute-0 systemd-sysv-generator[57277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:39:46 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 08 09:39:46 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 08 09:39:46 compute-0 sudo[57207]: pam_unix(sudo:session): session closed for user root
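
The 09:39:42-09:39:46 sequence is the standard pattern for installing a custom service: copy the unit file, copy a systemd preset, daemon-reload, then enable and start. A minimal shell equivalent, assuming the unit and preset files are staged in the current directory (their contents are not shown in the log):

    # Install the unit and its preset (destination paths from the log).
    install -o root -g root -m 0644 edpm-container-shutdown.service \
        /etc/systemd/system/edpm-container-shutdown.service
    install -o root -g root -m 0644 91-edpm-container-shutdown.preset \
        /etc/systemd/system-preset/91-edpm-container-shutdown.preset

    # Pick up the new unit, then enable and start it in one step,
    # mirroring daemon_reload=True enabled=True state=started.
    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
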
Oct 08 09:39:46 compute-0 sudo[57435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrjrjzysujxxbiadyzxankxxrvodcsiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916386.4359822-449-78234202941183/AnsiballZ_stat.py'
Oct 08 09:39:46 compute-0 sudo[57435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:46 compute-0 python3.9[57437]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:46 compute-0 sudo[57435]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:47 compute-0 sudo[57558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jewqwtemowghrkadvtmcttuafqwleodm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916386.4359822-449-78234202941183/AnsiballZ_copy.py'
Oct 08 09:39:47 compute-0 sudo[57558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:47 compute-0 python3.9[57560]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916386.4359822-449-78234202941183/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:47 compute-0 sudo[57558]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:47 compute-0 sudo[57710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkjqswtwcszbxzhheqgppsdqzcoqfriy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916387.6816137-494-77405419769508/AnsiballZ_stat.py'
Oct 08 09:39:47 compute-0 sudo[57710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:48 compute-0 python3.9[57712]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:48 compute-0 sudo[57710]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:48 compute-0 sudo[57833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neowqlgwvhpqqvrsbjxazzarpgrokgvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916387.6816137-494-77405419769508/AnsiballZ_copy.py'
Oct 08 09:39:48 compute-0 sudo[57833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:48 compute-0 python3.9[57835]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916387.6816137-494-77405419769508/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:48 compute-0 sudo[57833]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:49 compute-0 sudo[57985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oclmcmfrmxztbhomajhwtrxgeslwshgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916388.89649-539-59815335601439/AnsiballZ_systemd.py'
Oct 08 09:39:49 compute-0 sudo[57985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:49 compute-0 python3.9[57987]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:39:49 compute-0 systemd[1]: Reloading.
Oct 08 09:39:49 compute-0 systemd-rc-local-generator[58012]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:39:49 compute-0 systemd-sysv-generator[58015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:39:49 compute-0 systemd[1]: Reloading.
Oct 08 09:39:49 compute-0 systemd-rc-local-generator[58052]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:39:49 compute-0 systemd-sysv-generator[58055]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:39:49 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 09:39:49 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 09:39:49 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 09:39:49 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 09:39:49 compute-0 sudo[57985]: pam_unix(sudo:session): session closed for user root
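
netns-placeholder starts, is reported "Deactivated successfully", and only then "Finished" — the signature of a Type=oneshot unit that does its work and exits. The transient run-netns-placeholder.mount in the log suggests the unit briefly creates a network namespace (which bind-mounts under /run/netns) to force that directory into existence. The unit's real contents are not in the log; a purely hypothetical reconstruction consistent with these messages:

    # Hypothetical sketch; only the unit name and its
    # "Create netns directory" description appear in the log.
    cat > /etc/systemd/system/netns-placeholder.service <<'EOF'
    [Unit]
    Description=Create netns directory

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/ip netns add placeholder
    ExecStartPost=/usr/sbin/ip netns delete placeholder

    [Install]
    WantedBy=multi-user.target
    EOF
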
Oct 08 09:39:50 compute-0 python3.9[58213]: ansible-ansible.builtin.service_facts Invoked
Oct 08 09:39:50 compute-0 network[58230]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 09:39:50 compute-0 network[58231]: 'network-scripts' will be removed from distribution in near future.
Oct 08 09:39:50 compute-0 network[58232]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 09:39:54 compute-0 sudo[58494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbmexvujexrunmywzhaoaqvvufpjdcgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916393.997296-587-214135857983983/AnsiballZ_systemd.py'
Oct 08 09:39:54 compute-0 sudo[58494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:54 compute-0 python3.9[58496]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:39:54 compute-0 systemd[1]: Reloading.
Oct 08 09:39:54 compute-0 systemd-rc-local-generator[58523]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:39:54 compute-0 systemd-sysv-generator[58528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:39:54 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 08 09:39:55 compute-0 iptables.init[58536]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 08 09:39:55 compute-0 iptables.init[58536]: iptables: Flushing firewall rules: [  OK  ]
Oct 08 09:39:55 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Oct 08 09:39:55 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 08 09:39:55 compute-0 sudo[58494]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:55 compute-0 sudo[58731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlazrmctqfxdhuuagzbncncfmehkizjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916395.3597655-587-118321998874164/AnsiballZ_systemd.py'
Oct 08 09:39:55 compute-0 sudo[58731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:55 compute-0 python3.9[58733]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:39:56 compute-0 sudo[58731]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:56 compute-0 sudo[58885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekrorltjuggmfydtzeaonzyuoniodkih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916396.5355227-635-173498417501100/AnsiballZ_systemd.py'
Oct 08 09:39:56 compute-0 sudo[58885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:57 compute-0 python3.9[58887]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:39:57 compute-0 systemd[1]: Reloading.
Oct 08 09:39:57 compute-0 systemd-sysv-generator[58919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:39:57 compute-0 systemd-rc-local-generator[58912]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:39:57 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 08 09:39:57 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 08 09:39:57 compute-0 sudo[58885]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:58 compute-0 sudo[59077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stmxijnwxicquvcxlvbecruirozcnwdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916397.6741204-659-83300665607748/AnsiballZ_command.py'
Oct 08 09:39:58 compute-0 sudo[59077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:58 compute-0 python3.9[59079]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:39:58 compute-0 sudo[59077]: pam_unix(sudo:session): session closed for user root
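
Taken together, 09:39:54-09:39:58 is a firewall backend migration: stop and disable the legacy iptables/ip6tables init services, enable the native nftables service, then clear whatever ruleset is left behind before the EDPM rules are generated. The equivalent commands, with names taken from the log:

    # Retire the legacy services (enabled=False state=stopped above).
    systemctl disable --now iptables.service ip6tables.service

    # Switch to nftables and start from a clean slate.
    systemctl enable --now nftables.service
    nft flush ruleset
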
Oct 08 09:39:59 compute-0 sudo[59230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmkstjbfoaypexfhngvfkegkjfjsskia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916398.9007385-701-11858153407798/AnsiballZ_stat.py'
Oct 08 09:39:59 compute-0 sudo[59230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:59 compute-0 python3.9[59232]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:39:59 compute-0 sudo[59230]: pam_unix(sudo:session): session closed for user root
Oct 08 09:39:59 compute-0 sudo[59355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlksaqwcdjjpqmjqefuwcpjvusdgjgyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916398.9007385-701-11858153407798/AnsiballZ_copy.py'
Oct 08 09:39:59 compute-0 sudo[59355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:39:59 compute-0 python3.9[59357]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916398.9007385-701-11858153407798/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:39:59 compute-0 sudo[59355]: pam_unix(sudo:session): session closed for user root
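
Note the validate=/usr/sbin/sshd -T -f %s parameter on the copy: Ansible writes the new sshd_config to a temporary file, runs the validator against it, and only moves it into place if the check succeeds. The same guard by hand, with /tmp/sshd_config.new standing in for Ansible's temp file:

    # sshd -T parses and prints the effective configuration,
    # failing on any syntax error; install only if it passes.
    /usr/sbin/sshd -T -f /tmp/sshd_config.new >/dev/null \
      && install -m 0600 /tmp/sshd_config.new /etc/ssh/sshd_config \
      && systemctl reload sshd
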
Oct 08 09:40:00 compute-0 python3.9[59508]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:40:00 compute-0 polkitd[6524]: Registered Authentication Agent for unix-process:59510:235943 (system bus name :1.523 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 08 09:40:02 compute-0 anacron[1066]: Job `cron.weekly' started
Oct 08 09:40:02 compute-0 anacron[1066]: Job `cron.weekly' terminated
Oct 08 09:40:25 compute-0 polkit-agent-helper-1[59522]: pam_unix(polkit-1:auth): conversation failed
Oct 08 09:40:25 compute-0 polkit-agent-helper-1[59522]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 08 09:40:25 compute-0 polkitd[6524]: Unregistered Authentication Agent for unix-process:59510:235943 (system bus name :1.523, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 08 09:40:25 compute-0 polkitd[6524]: Operator of unix-process:59510:235943 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.522 [<unknown>] (owned by unix-user:zuul)
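
The 09:40:00 sshd reload was invoked without become — unlike every other privileged step here, there is no surrounding sudo session — so systemd deferred to polkit for the org.freedesktop.systemd1.manage-units action. The pkttyagent fallback could not supply a root password for the zuul user, and the request failed after roughly 25 seconds. Whether a user may perform the action can be probed with pkcheck; the PID below is a placeholder:

    # Ask polkit whether the calling process may manage units.
    # 12345 is a placeholder for the PID of the requesting shell.
    pkcheck --action-id org.freedesktop.systemd1.manage-units \
            --process 12345

The retried run later in this log avoids the problem the same way the other tasks do: by executing the systemd module under sudo.
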
Oct 08 09:40:26 compute-0 sshd-session[54774]: Connection closed by 192.168.122.30 port 52872
Oct 08 09:40:26 compute-0 sshd-session[54771]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:40:26 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct 08 09:40:26 compute-0 systemd[1]: session-14.scope: Consumed 18.957s CPU time.
Oct 08 09:40:26 compute-0 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Oct 08 09:40:26 compute-0 systemd-logind[798]: Removed session 14.
Oct 08 09:40:38 compute-0 sshd-session[59550]: Accepted publickey for zuul from 192.168.122.30 port 50378 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:40:38 compute-0 systemd-logind[798]: New session 15 of user zuul.
Oct 08 09:40:38 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 08 09:40:38 compute-0 sshd-session[59550]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:40:39 compute-0 python3.9[59703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:40:40 compute-0 sudo[59857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urwqwctvzmfeckmejdswdhlfulczowrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916440.4935195-59-1110333725139/AnsiballZ_file.py'
Oct 08 09:40:40 compute-0 sudo[59857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:41 compute-0 python3.9[59859]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:40:41 compute-0 sudo[59857]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:41 compute-0 sudo[60032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kualstijmksiguebhwxgdfwjvidcilee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916441.4138727-83-66732939668806/AnsiballZ_stat.py'
Oct 08 09:40:41 compute-0 sudo[60032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:42 compute-0 python3.9[60034]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:40:42 compute-0 sudo[60032]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:42 compute-0 sudo[60110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdvmtklblbnpoeqvryqgtzqddoerirsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916441.4138727-83-66732939668806/AnsiballZ_file.py'
Oct 08 09:40:42 compute-0 sudo[60110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:42 compute-0 python3.9[60112]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.q5bnxeta recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:40:42 compute-0 sudo[60110]: pam_unix(sudo:session): session closed for user root
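
/root/.config/containers/auth.json is the registry credential store podman consults; the two tasks above only pre-create the directory and set ownership and mode so the zuul user can manage it. The file itself would be populated with podman login, for example (quay.io is an assumed example registry, not from the log):

    # Write registry credentials into the file managed above.
    podman login --authfile /root/.config/containers/auth.json quay.io
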
Oct 08 09:40:43 compute-0 sudo[60262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwkiyswgdvriikbqocewpahyldkgblxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916443.1127021-143-116774479297749/AnsiballZ_stat.py'
Oct 08 09:40:43 compute-0 sudo[60262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:43 compute-0 python3.9[60264]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:40:43 compute-0 sudo[60262]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:43 compute-0 sudo[60340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lanxpoavyjrmqvavlndjnztgshrwulba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916443.1127021-143-116774479297749/AnsiballZ_file.py'
Oct 08 09:40:43 compute-0 sudo[60340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:44 compute-0 python3.9[60342]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.s9oazx91 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:40:44 compute-0 sudo[60340]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:44 compute-0 sudo[60492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzeqzwgivkiqcabntyfangxpymmwmdvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916444.3445153-182-137312665418259/AnsiballZ_file.py'
Oct 08 09:40:44 compute-0 sudo[60492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:44 compute-0 python3.9[60494]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:40:44 compute-0 sudo[60492]: pam_unix(sudo:session): session closed for user root
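
The setype=container_file_t on /var/local/libexec matters here: the EDPM helper scripts placed in it must carry an SELinux type that container processes are allowed to read and execute. The file module applies the type directly, much like:

    # Label the tree so containers may access it
    # (setype=container_file_t in the task above).
    chcon -R -t container_file_t /var/local/libexec
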
Oct 08 09:40:45 compute-0 sudo[60644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwsxdcdckphlhxkckniuvrgvciwtwirw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916445.0834124-206-256091092530142/AnsiballZ_stat.py'
Oct 08 09:40:45 compute-0 sudo[60644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:45 compute-0 python3.9[60646]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:40:45 compute-0 sudo[60644]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:45 compute-0 sudo[60722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coelbrxwdhrxwkzqldyrhooefqtqpcbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916445.0834124-206-256091092530142/AnsiballZ_file.py'
Oct 08 09:40:45 compute-0 sudo[60722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:45 compute-0 python3.9[60724]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:40:45 compute-0 sudo[60722]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:46 compute-0 sudo[60874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfvuccgcapdtpbvwkrpnjposfplughji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916446.11619-206-80642403450331/AnsiballZ_stat.py'
Oct 08 09:40:46 compute-0 sudo[60874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:46 compute-0 python3.9[60876]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:40:46 compute-0 sudo[60874]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:46 compute-0 sudo[60952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfcnxzxycebspbjeaunpkzxffibcapds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916446.11619-206-80642403450331/AnsiballZ_file.py'
Oct 08 09:40:46 compute-0 sudo[60952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:47 compute-0 python3.9[60954]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:40:47 compute-0 sudo[60952]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:47 compute-0 sudo[61104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uosqjtdxoufercoiovdouafiqpqnzenq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916447.5419457-275-203084754647485/AnsiballZ_file.py'
Oct 08 09:40:47 compute-0 sudo[61104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:47 compute-0 python3.9[61106]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:40:47 compute-0 sudo[61104]: pam_unix(sudo:session): session closed for user root
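
The mode=420 on the system-preset directory is most likely not a bug in the role: an unquoted 0644 in YAML is parsed as an octal literal and reaches the module as the decimal integer 420, which is the same bit pattern (0o644 = 420), so the resulting permissions are still rw-r--r--. Quick check:

    # 420 decimal rendered in octal is 644.
    printf '%o\n' 420    # -> 644
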
Oct 08 09:40:48 compute-0 sudo[61256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvsvbhkvymmlqesacagsfiebrirlbzmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916448.2859626-299-175023977619921/AnsiballZ_stat.py'
Oct 08 09:40:48 compute-0 sudo[61256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:48 compute-0 python3.9[61258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:40:48 compute-0 sudo[61256]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:49 compute-0 sudo[61334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvjfsteovzdqsdgpfqxwtocymilftok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916448.2859626-299-175023977619921/AnsiballZ_file.py'
Oct 08 09:40:49 compute-0 sudo[61334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:49 compute-0 python3.9[61336]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:40:49 compute-0 sudo[61334]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:49 compute-0 sudo[61486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezusytpjnmusaigfukctxfsmmymqumis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916449.4984806-335-37641871898085/AnsiballZ_stat.py'
Oct 08 09:40:49 compute-0 sudo[61486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:49 compute-0 python3.9[61488]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:40:50 compute-0 sudo[61486]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:50 compute-0 sudo[61564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svshqoaemmqwjsenikpuuzzvguyfoenn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916449.4984806-335-37641871898085/AnsiballZ_file.py'
Oct 08 09:40:50 compute-0 sudo[61564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:50 compute-0 python3.9[61566]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:40:50 compute-0 sudo[61564]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:51 compute-0 sudo[61716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asodchddrboxicmewtpqwekxngegfowz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916450.6985786-371-264960874732470/AnsiballZ_systemd.py'
Oct 08 09:40:51 compute-0 sudo[61716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:51 compute-0 python3.9[61718]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:40:51 compute-0 systemd[1]: Reloading.
Oct 08 09:40:51 compute-0 systemd-sysv-generator[61749]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:40:51 compute-0 systemd-rc-local-generator[61742]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:40:51 compute-0 sudo[61716]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:52 compute-0 sudo[61906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcdvzshzpeytonsedcgrwygatuhsomcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916452.1415288-395-179823223571171/AnsiballZ_stat.py'
Oct 08 09:40:52 compute-0 sudo[61906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:52 compute-0 python3.9[61908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:40:52 compute-0 sudo[61906]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:52 compute-0 sudo[61984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obczphhhpruuftamplrtpercmjvbllrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916452.1415288-395-179823223571171/AnsiballZ_file.py'
Oct 08 09:40:52 compute-0 sudo[61984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:53 compute-0 python3.9[61986]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:40:53 compute-0 sudo[61984]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:53 compute-0 sudo[62136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxvatgzflwdglhjwvyjkkmvidysjhsuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916453.3265595-431-36099541744364/AnsiballZ_stat.py'
Oct 08 09:40:53 compute-0 sudo[62136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:53 compute-0 python3.9[62138]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:40:53 compute-0 sudo[62136]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:54 compute-0 sudo[62214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdhwdaknpvbzihzzbunpwlkcraveywjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916453.3265595-431-36099541744364/AnsiballZ_file.py'
Oct 08 09:40:54 compute-0 sudo[62214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:54 compute-0 python3.9[62216]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:40:54 compute-0 sudo[62214]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:54 compute-0 sudo[62366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sidyqjxgzclvxgrleyezvtywhkwwcvjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916454.561696-467-279185855027513/AnsiballZ_systemd.py'
Oct 08 09:40:54 compute-0 sudo[62366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:40:55 compute-0 python3.9[62368]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:40:55 compute-0 systemd[1]: Reloading.
Oct 08 09:40:55 compute-0 systemd-rc-local-generator[62396]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:40:55 compute-0 systemd-sysv-generator[62400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:40:55 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 09:40:55 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 09:40:55 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 09:40:55 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 09:40:55 compute-0 sudo[62366]: pam_unix(sudo:session): session closed for user root
Oct 08 09:40:56 compute-0 python3.9[62559]: ansible-ansible.builtin.service_facts Invoked
Oct 08 09:40:56 compute-0 network[62576]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 09:40:56 compute-0 network[62577]: 'network-scripts' will be removed from distribution in near future.
Oct 08 09:40:56 compute-0 network[62578]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 09:41:00 compute-0 sudo[62839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpqwbpcemkarafkovcfebqvekjzarmec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916460.0572686-545-10649160258065/AnsiballZ_stat.py'
Oct 08 09:41:00 compute-0 sudo[62839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:00 compute-0 python3.9[62841]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:00 compute-0 sudo[62839]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:00 compute-0 sudo[62917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmghxyxlszkuqgabsqwqupliulqknxtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916460.0572686-545-10649160258065/AnsiballZ_file.py'
Oct 08 09:41:00 compute-0 sudo[62917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:00 compute-0 python3.9[62919]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:00 compute-0 sudo[62917]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:01 compute-0 sudo[63069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tptxuyldwysvmnorlfmpfmnaqbnslsqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916461.3568025-584-251076024842193/AnsiballZ_file.py'
Oct 08 09:41:01 compute-0 sudo[63069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:01 compute-0 python3.9[63071]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:01 compute-0 sudo[63069]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:02 compute-0 sudo[63221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcteyjuyskznheamoqxguwrwaxkyqljc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916462.0393167-608-67828803085351/AnsiballZ_stat.py'
Oct 08 09:41:02 compute-0 sudo[63221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:02 compute-0 python3.9[63223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:02 compute-0 sudo[63221]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:03 compute-0 sudo[63344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsyoayjmnmffhtpqljvdhwqewziyguxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916462.0393167-608-67828803085351/AnsiballZ_copy.py'
Oct 08 09:41:03 compute-0 sudo[63344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:03 compute-0 python3.9[63346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916462.0393167-608-67828803085351/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:03 compute-0 sudo[63344]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:04 compute-0 sudo[63496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omukzptluvzufxjazyazpshzxxcosxct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916463.7040024-662-30634631914231/AnsiballZ_timezone.py'
Oct 08 09:41:04 compute-0 sudo[63496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:04 compute-0 python3.9[63498]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 08 09:41:04 compute-0 systemd[1]: Starting Time & Date Service...
Oct 08 09:41:04 compute-0 systemd[1]: Started Time & Date Service.
Oct 08 09:41:04 compute-0 sudo[63496]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:05 compute-0 sudo[63652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgpooadwasmolmhaocecstdetjyuqjhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916464.9430985-689-180311322214026/AnsiballZ_file.py'
Oct 08 09:41:05 compute-0 sudo[63652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:05 compute-0 python3.9[63654]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:05 compute-0 sudo[63652]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:05 compute-0 sudo[63804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvoylinalbxcigfngknitxwauuupahmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916465.686154-713-67116703037697/AnsiballZ_stat.py'
Oct 08 09:41:05 compute-0 sudo[63804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:06 compute-0 python3.9[63806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:06 compute-0 sudo[63804]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:06 compute-0 sudo[63927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpupfegmtfjleoctwflcswedeovmblut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916465.686154-713-67116703037697/AnsiballZ_copy.py'
Oct 08 09:41:06 compute-0 sudo[63927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:06 compute-0 python3.9[63929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916465.686154-713-67116703037697/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:06 compute-0 sudo[63927]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:07 compute-0 sudo[64079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffcocehmndcjylbffwiloavijwhjiggw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916467.0128467-758-123189238895354/AnsiballZ_stat.py'
Oct 08 09:41:07 compute-0 sudo[64079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:07 compute-0 python3.9[64081]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:07 compute-0 sudo[64079]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:07 compute-0 sudo[64202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvcqrwktdxcnnwbvtihgqnwpresglshq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916467.0128467-758-123189238895354/AnsiballZ_copy.py'
Oct 08 09:41:07 compute-0 sudo[64202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:07 compute-0 python3.9[64204]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916467.0128467-758-123189238895354/.source.yaml _original_basename=.462ryte_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:08 compute-0 sudo[64202]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:08 compute-0 sudo[64354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aitmspliwhcgjwfhnbvbzcrvyqjnevue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916468.2791557-803-89898809109499/AnsiballZ_stat.py'
Oct 08 09:41:08 compute-0 sudo[64354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:08 compute-0 python3.9[64356]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:08 compute-0 sudo[64354]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:09 compute-0 sudo[64477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuymgcrsrgkaeyphkzhfkyaujmgifhgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916468.2791557-803-89898809109499/AnsiballZ_copy.py'
Oct 08 09:41:09 compute-0 sudo[64477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:09 compute-0 python3.9[64479]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916468.2791557-803-89898809109499/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:09 compute-0 sudo[64477]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:09 compute-0 sudo[64629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bradbzkfxtzcgpnuilkjspnpnzsvgquj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916469.5548527-848-30621745729376/AnsiballZ_command.py'
Oct 08 09:41:09 compute-0 sudo[64629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:10 compute-0 python3.9[64631]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:41:10 compute-0 sudo[64629]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:10 compute-0 sudo[64782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqlyscrtpyxphvscwazkuetecologckb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916470.5084074-872-190106979805016/AnsiballZ_command.py'
Oct 08 09:41:10 compute-0 sudo[64782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:10 compute-0 python3.9[64784]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:41:11 compute-0 sudo[64782]: pam_unix(sudo:session): session closed for user root
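
nft -f loads a rule file as a single atomic transaction (the whole file applies or nothing does), and nft -j list ruleset then dumps the live ruleset as JSON — presumably so the role can inspect current state before generating the EDPM chain files. Reproduced by hand:

    # Load the compat ruleset atomically, then export the live
    # ruleset as JSON; json.tool is only for readable output here.
    nft -f /etc/nftables/iptables.nft
    nft -j list ruleset | python3 -m json.tool
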
Oct 08 09:41:11 compute-0 sudo[64935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxcnwlxqggyiztfrejsqivvgkbanvtce ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759916471.2639167-896-188455016718220/AnsiballZ_edpm_nftables_from_files.py'
Oct 08 09:41:11 compute-0 sudo[64935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:11 compute-0 python3[64937]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 08 09:41:11 compute-0 sudo[64935]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:12 compute-0 sudo[65087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytxdwghypgxjxfcmuetsidfhjaptjtoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916472.1359742-920-84569827939591/AnsiballZ_stat.py'
Oct 08 09:41:12 compute-0 sudo[65087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:12 compute-0 python3.9[65089]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:12 compute-0 sudo[65087]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:12 compute-0 sudo[65210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuwzvjaroaxodoqcixloataepbfszwda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916472.1359742-920-84569827939591/AnsiballZ_copy.py'
Oct 08 09:41:12 compute-0 sudo[65210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:13 compute-0 python3.9[65212]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916472.1359742-920-84569827939591/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:13 compute-0 sudo[65210]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:13 compute-0 sudo[65362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbvxminavulwhlsipmykemlmhmubqpvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916473.4992251-965-236898675351959/AnsiballZ_stat.py'
Oct 08 09:41:13 compute-0 sudo[65362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:14 compute-0 python3.9[65364]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:14 compute-0 sudo[65362]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:14 compute-0 sudo[65485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxdnshpxkdaukncsggxgabwhgtjhluzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916473.4992251-965-236898675351959/AnsiballZ_copy.py'
Oct 08 09:41:14 compute-0 sudo[65485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:14 compute-0 python3.9[65487]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916473.4992251-965-236898675351959/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:14 compute-0 sudo[65485]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:15 compute-0 sudo[65637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnixiddxiadilsykstjkdwikqavorjea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916474.8748808-1010-4277159368343/AnsiballZ_stat.py'
Oct 08 09:41:15 compute-0 sudo[65637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:15 compute-0 python3.9[65639]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:15 compute-0 sudo[65637]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:15 compute-0 sudo[65760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oftkosobhrqdndqfzshxxwwriljdnykl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916474.8748808-1010-4277159368343/AnsiballZ_copy.py'
Oct 08 09:41:15 compute-0 sudo[65760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:15 compute-0 python3.9[65762]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916474.8748808-1010-4277159368343/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:15 compute-0 sudo[65760]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:16 compute-0 sudo[65912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zimidvdrzjmdoaftwqhqbjqjrdnguzes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916476.2035341-1055-215348333513182/AnsiballZ_stat.py'
Oct 08 09:41:16 compute-0 sudo[65912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:16 compute-0 python3.9[65914]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:16 compute-0 sudo[65912]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:17 compute-0 sudo[66036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qssjwgscnavmxxhqrqzrrkfwblnkzkma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916476.2035341-1055-215348333513182/AnsiballZ_copy.py'
Oct 08 09:41:17 compute-0 sudo[66036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:17 compute-0 python3.9[66038]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916476.2035341-1055-215348333513182/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:17 compute-0 sudo[66036]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:18 compute-0 sudo[66188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lonzmepghuhrkynvngagpmresrmpuzfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916477.8239245-1100-237685167320183/AnsiballZ_stat.py'
Oct 08 09:41:18 compute-0 sudo[66188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:18 compute-0 python3.9[66190]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:41:18 compute-0 sudo[66188]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:18 compute-0 sudo[66311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtkkfysrpnzrhlmmstfxailztrmhengm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916477.8239245-1100-237685167320183/AnsiballZ_copy.py'
Oct 08 09:41:18 compute-0 sudo[66311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:19 compute-0 python3.9[66313]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916477.8239245-1100-237685167320183/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:19 compute-0 sudo[66311]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:19 compute-0 sudo[66463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjgpvmpuhfbfhcoififuqsqonletejxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916479.2671525-1145-231166984314912/AnsiballZ_file.py'
Oct 08 09:41:19 compute-0 sudo[66463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:19 compute-0 python3.9[66465]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:19 compute-0 sudo[66463]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:20 compute-0 sudo[66615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmipebnjxloznjqqnsaxiqpnesixxjqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916480.3098998-1169-94817329218039/AnsiballZ_command.py'
Oct 08 09:41:20 compute-0 sudo[66615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:20 compute-0 python3.9[66617]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:41:20 compute-0 sudo[66615]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:21 compute-0 sudo[66774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihmrsjzpqvptzluorcocxzzjzdtlsxxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916481.1106071-1193-186504573054613/AnsiballZ_blockinfile.py'
Oct 08 09:41:21 compute-0 sudo[66774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:21 compute-0 python3.9[66776]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:21 compute-0 sudo[66774]: pam_unix(sudo:session): session closed for user root
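The tasks above stage the EDPM flush/chain/rule snippets, syntax-check the combined set, and wire it into the persistent nftables config. A minimal shell sketch of the same two steps (paths and the include block taken from the logged parameters; edpm-update-jumps.nft and edpm-jumps.nft are presumably staged earlier in the run):
    # Dry-run the combined ruleset without touching the live tables (-c = check only)
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # Persistence: the blockinfile task adds this include block to /etc/sysconfig/nftables.conf,
    # validating the result with `nft -c -f %s` before writing it:
    #   # BEGIN ANSIBLE MANAGED BLOCK
    #   include "/etc/nftables/iptables.nft"
    #   include "/etc/nftables/edpm-chains.nft"
    #   include "/etc/nftables/edpm-rules.nft"
    #   include "/etc/nftables/edpm-jumps.nft"
    #   # END ANSIBLE MANAGED BLOCK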
Oct 08 09:41:22 compute-0 sudo[66927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bassaoljnguilsunqnbeosvjxncgzjov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916482.0660355-1220-138500868400988/AnsiballZ_file.py'
Oct 08 09:41:22 compute-0 sudo[66927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:22 compute-0 python3.9[66929]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:22 compute-0 sudo[66927]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:22 compute-0 sudo[67079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blwirugysbxryjshnmymlihkhjxsjkxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916482.697724-1220-164816896928267/AnsiballZ_file.py'
Oct 08 09:41:22 compute-0 sudo[67079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:23 compute-0 python3.9[67081]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:23 compute-0 sudo[67079]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:23 compute-0 sudo[67231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glbcrumimxtymozfqzwldlzqzjrjfnae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916483.3934903-1265-131359793937251/AnsiballZ_mount.py'
Oct 08 09:41:23 compute-0 sudo[67231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:24 compute-0 python3.9[67233]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 08 09:41:24 compute-0 sudo[67231]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:24 compute-0 sudo[67384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhoynzmwgakvxapoifrleeksixtkxxyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916484.2997644-1265-173897597041092/AnsiballZ_mount.py'
Oct 08 09:41:24 compute-0 sudo[67384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:24 compute-0 python3.9[67386]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 08 09:41:24 compute-0 sudo[67384]: pam_unix(sudo:session): session closed for user root
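The hugepage tasks boil down to two hugetlbfs mounts that ansible.posix.mount (state=mounted, boot=True) also persists; roughly:
    # mount points created above with owner zuul, group hugetlbfs, mode 0775
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # persisted as fstab entries along the lines of:
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0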
Oct 08 09:41:25 compute-0 sshd-session[59553]: Connection closed by 192.168.122.30 port 50378
Oct 08 09:41:25 compute-0 sshd-session[59550]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:41:25 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 08 09:41:25 compute-0 systemd[1]: session-15.scope: Consumed 30.783s CPU time.
Oct 08 09:41:25 compute-0 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Oct 08 09:41:25 compute-0 systemd-logind[798]: Removed session 15.
Oct 08 09:41:27 compute-0 chronyd[54290]: Selected source 23.133.168.247 (pool.ntp.org)
Oct 08 09:41:30 compute-0 sshd-session[67412]: Accepted publickey for zuul from 192.168.122.30 port 35506 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:41:30 compute-0 systemd-logind[798]: New session 16 of user zuul.
Oct 08 09:41:30 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 08 09:41:30 compute-0 sshd-session[67412]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:41:31 compute-0 sudo[67565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpvrukussrrhkfvnjueirhqrdivmmdcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916490.6048243-18-156649560185621/AnsiballZ_tempfile.py'
Oct 08 09:41:31 compute-0 sudo[67565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:31 compute-0 python3.9[67567]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 08 09:41:31 compute-0 sudo[67565]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:32 compute-0 sudo[67717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyqcpxcztgmmvquuvafskoapsbtlinis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916491.551274-54-75874824635205/AnsiballZ_stat.py'
Oct 08 09:41:32 compute-0 sudo[67717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:32 compute-0 python3.9[67719]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:41:32 compute-0 sudo[67717]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:33 compute-0 sudo[67869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdhivkuglubjbpigzfeihmobundzqzba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916492.540785-84-157080036368845/AnsiballZ_setup.py'
Oct 08 09:41:33 compute-0 sudo[67869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:33 compute-0 python3.9[67871]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:41:33 compute-0 sudo[67869]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:34 compute-0 sudo[68021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekyonrebcdybtosvahchtetjmxozblnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916493.7903597-109-7302305677154/AnsiballZ_blockinfile.py'
Oct 08 09:41:34 compute-0 sudo[68021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:34 compute-0 python3.9[68023]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYQPNjF86l7L2Hj2/ras4UwWV1W/v43YSKx2wyuHDdieMiPaKbrXfDkjmyzUBERrbiTo1QPGQAMAmA2ykBglPN8r/+0SzTmZFPysM5MwJdoYFoZLOFzs9ldQJxEusbWvZnvF+I9UgftR9Kc0etIrQ6xgLbAtGZNGqj5b2kDFCC3J7RJB10JjuqkZ7faqGp+JLC/txEe9rDOAOpOpa885Sx+ZK+5P8OmEbpqHH3vL1O9we9lyRIs2Y/RpIrncEKyaA84WKimjvp832GDFqVGlFklY8lsH31+AUKXfk65cwhnczZO7DTB1/+0QUWhiy+uUUKLdJ1C3AFfHNBBH0WWHolNsPiYjSaNrUIgxXyRLkGtLeTAtEa9LNniw8KKCXI/jptXVVqyfHGOFIzo11NDDSTeCPpVG2MrjX9vJZknGeShJLavvHzVmc1N/zNpgq0Rr0FEyFZL384e8WgnmTY1lBf7tAPdMyIaNEJgEE4MobwqVDSwMmgWKmKoOeY5jsWNlM=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzclsFPuApUw4nYRrZrI5lJm2aKty4lBzS+387uCINA
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCmuS8ms5fq9IWCpSG062zv6KqUIHSk9g+RlcFiU/nKSB1OMQ56HhCeuGAOEbiyfVsMqC143W9W+Q6X1JDoRkcg=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCH7J4/vrAjqY7b3+xDoxlOrkvqhtdMtNCRu8feksOJjh2Lg2Yk5a4TpRFHHcUew6Or+BSrCAe5KLIJookdMX3AnHBTeYgFVrph2Ke0jsZhtIDdYFPya4HaYgVScxezyYjpFJsOgHIasA47X1Ai7KtSHamdGUMHvyRPFaMroDQGOH5uNA58Pr0jAvA9/p32JhzVhvFTNhdp5AZuuf53LCOoAJPpvxAfhZJVwv0zpQu1qJ2MQ4F6PjmLmpJe9IFedhTbswP4+A8raCmSvJK/X3zbL6A5C78i72YF0dVlX4E5Jgq2BymgfJXA2vRrB7WzfFXN/KCT+A6KjshRy8vEZTlewfHk3bMt+IjAgRaPsvV2gwOQb0lhzfUX2RkPxHTTunUAUf1PJwBTKah0plZAQoGQce+8MWTqKP842KIoZPO7/LQQZR21apoIRIEt1OtR3pITkULZqmoYaZKqVzPCyoagXj2v0W4E//8slRvaC4n2qfMRwvp2VR0mSv9qwMeqnm0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMt6YRNNCvMAUwHQzPKNq18k03sF+qAP+8fg1vdKmMsQ
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN1LMOBquYaNyOmBNhqWyrm3Ot0C+prylWlOCYwa7IIp3WZH4GHwVhjD6VAwSa/KvI01xKiiJwO/WJ4zgAnMAiM=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp3Vp6dX4ruCK781x4GIhtAtcJdT75tsPxH3O/YwMPa1JuQj17BT+IZbu0qvi56CLtWm5GwO9cF5N1u+ZpYWIwNbEJlz4q4LeJud7OFwwvwDTdM2fZylZt2dEtwqbmDJUsJxwcLQshtmSxpRR5Z53dCJAMTZiKGF/MiJrVkc7A2PfxMnLH568W9poUGj9jUYetHoRmwKl9hes+OQRljbjUi8gLpseivGxW9IAewXRhJi0ybLNDnQM0iSkdQqaTVD7laQKxpynfO1a0b7U6oyFRdyTqMJqyDKe8Vx+D1esV9oZKn7UEtj+WGUAv3StaLzrk3fjhi4XePCs0Ao1s/B1MPZCcM0Po5BdHAHhf4CbUSRS+oaAS7KaaWkWTKLTKEDWS6DjX6KUR9hUyLQ54IMYu17UP6JclJnH5c9FmUQls07pus/CkhX0IIgOTinLYeOJSdBsKA9JUrnQzXKMAwzjKL18kG8OZ+Yaf7msme1EVikR9ljtRB88k+DtapF5wub8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBDnMNJEcPeKIHMEAdXUabsWNwdNGhiYyZLatE1eeBqY
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLDW7MDD+6+vPlFKWCI8yHUVjDpLwcAatqV8Xhxm53MJMkyP9vCai5lIMwJluZIDUkA83WhSi06EgMc1afHFONA=
                                             create=True mode=0644 path=/tmp/ansible.d0hsaq01 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:34 compute-0 sudo[68021]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:34 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 08 09:41:35 compute-0 sudo[68175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhjntuspsnntbahjamjadtzobapsyfem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916494.6977396-133-137590267206861/AnsiballZ_command.py'
Oct 08 09:41:35 compute-0 sudo[68175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:35 compute-0 python3.9[68177]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.d0hsaq01' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:41:35 compute-0 sudo[68175]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:36 compute-0 sudo[68329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnklyjsqtsobkpigyjtaaegvlcpewser ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916495.5287926-157-54285973489712/AnsiballZ_file.py'
Oct 08 09:41:36 compute-0 sudo[68329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:36 compute-0 python3.9[68331]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.d0hsaq01 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:36 compute-0 sudo[68329]: pam_unix(sudo:session): session closed for user root
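This pass assembles a cluster-wide known-hosts file: the gathered host keys are written into a temp file between managed-block markers, the temp file overwrites /etc/ssh/ssh_known_hosts, and the temp file is removed. A sketch of the sequence (the path comes from the tempfile task above):
    TMP=/tmp/ansible.d0hsaq01     # path returned by ansible.builtin.tempfile
    # blockinfile fills $TMP with the rsa/ed25519/ecdsa host keys of compute-0/1/2
    # between "# BEGIN/END ANSIBLE MANAGED BLOCK" markers, then:
    cat "$TMP" > /etc/ssh/ssh_known_hosts
    rm -f "$TMP"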
Oct 08 09:41:36 compute-0 sshd-session[67415]: Connection closed by 192.168.122.30 port 35506
Oct 08 09:41:36 compute-0 sshd-session[67412]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:41:36 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 08 09:41:36 compute-0 systemd[1]: session-16.scope: Consumed 3.652s CPU time.
Oct 08 09:41:36 compute-0 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Oct 08 09:41:36 compute-0 systemd-logind[798]: Removed session 16.
Oct 08 09:41:41 compute-0 sshd-session[68356]: Accepted publickey for zuul from 192.168.122.30 port 36158 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:41:41 compute-0 systemd-logind[798]: New session 17 of user zuul.
Oct 08 09:41:41 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 08 09:41:41 compute-0 sshd-session[68356]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:41:42 compute-0 python3.9[68509]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:41:43 compute-0 sudo[68663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elruemerfbchqadazsdrygobmabyovsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916503.044535-56-108075712558610/AnsiballZ_systemd.py'
Oct 08 09:41:43 compute-0 sudo[68663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:43 compute-0 python3.9[68665]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 08 09:41:44 compute-0 sudo[68663]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:44 compute-0 sudo[68817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjnxabyodaklhukwkpyzvnqkonrfuqvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916504.2395647-80-54915474096875/AnsiballZ_systemd.py'
Oct 08 09:41:44 compute-0 sudo[68817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:44 compute-0 python3.9[68819]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:41:44 compute-0 sudo[68817]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:45 compute-0 sudo[68970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofyenipkgskeyatxwsmyrusgjverwqpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916505.147636-107-71901784068022/AnsiballZ_command.py'
Oct 08 09:41:45 compute-0 sudo[68970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:45 compute-0 python3.9[68972]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:41:45 compute-0 sudo[68970]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:46 compute-0 sudo[69123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oimvclvqscwlypulfputvdtdfwtkmzdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916506.0017805-131-242042014513459/AnsiballZ_stat.py'
Oct 08 09:41:46 compute-0 sudo[69123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:46 compute-0 python3.9[69125]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:41:46 compute-0 sudo[69123]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:47 compute-0 sudo[69277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yviqtzfexycaysfllvjkkliubhotvwlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916506.852544-155-73716696785086/AnsiballZ_command.py'
Oct 08 09:41:47 compute-0 sudo[69277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:47 compute-0 python3.9[69279]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:41:47 compute-0 sudo[69277]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:48 compute-0 sudo[69432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slmckehjbgcqbmalyocngyzzpzipfazd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916507.5822923-179-10287838064767/AnsiballZ_file.py'
Oct 08 09:41:48 compute-0 sudo[69432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:48 compute-0 python3.9[69434]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:41:48 compute-0 sudo[69432]: pam_unix(sudo:session): session closed for user root
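With the files staged and validated, the firewall is applied in two steps, gated on the .changed marker. Equivalent shell, straight from the logged commands:
    # Create/refresh the EDPM chains first
    nft -f /etc/nftables/edpm-chains.nft
    # Because /etc/nftables/edpm-rules.nft.changed exists, flush and reload the rules
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    # Drop the marker so the reload is skipped on runs where the rules did not change
    rm -f /etc/nftables/edpm-rules.nft.changed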
Oct 08 09:41:48 compute-0 sshd-session[68359]: Connection closed by 192.168.122.30 port 36158
Oct 08 09:41:48 compute-0 sshd-session[68356]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:41:48 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 08 09:41:48 compute-0 systemd[1]: session-17.scope: Consumed 4.334s CPU time.
Oct 08 09:41:48 compute-0 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Oct 08 09:41:48 compute-0 systemd-logind[798]: Removed session 17.
Oct 08 09:41:53 compute-0 sshd-session[69460]: Accepted publickey for zuul from 192.168.122.30 port 37688 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:41:53 compute-0 systemd-logind[798]: New session 18 of user zuul.
Oct 08 09:41:53 compute-0 systemd[1]: Started Session 18 of User zuul.
Oct 08 09:41:53 compute-0 sshd-session[69460]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:41:55 compute-0 python3.9[69613]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:41:55 compute-0 sudo[69767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kospsduhckkeewiwzillknycpwvqrxjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916515.5920658-62-257109181488726/AnsiballZ_setup.py'
Oct 08 09:41:55 compute-0 sudo[69767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:56 compute-0 python3.9[69769]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:41:56 compute-0 sudo[69767]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:56 compute-0 sudo[69851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntdpnassasyynuopuyskqhqftiornptr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916515.5920658-62-257109181488726/AnsiballZ_dnf.py'
Oct 08 09:41:56 compute-0 sudo[69851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:41:57 compute-0 python3.9[69853]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 08 09:41:58 compute-0 sudo[69851]: pam_unix(sudo:session): session closed for user root
Oct 08 09:41:59 compute-0 python3.9[70004]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:42:00 compute-0 python3.9[70155]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
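The reboot check pairs the package-level signal from yum-utils with an explicit flag directory; roughly:
    # exits non-zero when installed updates (kernel, glibc, systemd, ...) require a reboot
    needs-restarting -r
    # any file dropped here by earlier roles also marks the node as reboot-required
    ls /var/lib/openstack/reboot_required/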
Oct 08 09:42:01 compute-0 python3.9[70305]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:42:01 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 09:42:02 compute-0 python3.9[70456]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:42:02 compute-0 sshd-session[69463]: Connection closed by 192.168.122.30 port 37688
Oct 08 09:42:02 compute-0 sshd-session[69460]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:42:02 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 08 09:42:02 compute-0 systemd[1]: session-18.scope: Consumed 5.856s CPU time.
Oct 08 09:42:02 compute-0 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Oct 08 09:42:02 compute-0 systemd-logind[798]: Removed session 18.
Oct 08 09:42:11 compute-0 sshd-session[70481]: Accepted publickey for zuul from 38.102.83.97 port 59276 ssh2: RSA SHA256:gAGXrS9nBEZo6eSiaUIpvcgcfSt2T2MqoUt9m43i77Q
Oct 08 09:42:11 compute-0 systemd-logind[798]: New session 19 of user zuul.
Oct 08 09:42:11 compute-0 systemd[1]: Started Session 19 of User zuul.
Oct 08 09:42:11 compute-0 sshd-session[70481]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:42:11 compute-0 sudo[70557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvoxgjipojdvtqxcgvocsbtwntoddtc ; /usr/bin/python3'
Oct 08 09:42:11 compute-0 sudo[70557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:11 compute-0 useradd[70561]: new group: name=ceph-admin, GID=42478
Oct 08 09:42:11 compute-0 useradd[70561]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 08 09:42:11 compute-0 sudo[70557]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:12 compute-0 sudo[70643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pctrxhbwpffjltgteihkrkikeqfzzfqk ; /usr/bin/python3'
Oct 08 09:42:12 compute-0 sudo[70643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:12 compute-0 sudo[70643]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:12 compute-0 sudo[70716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlasxwzdvydplrwkvuoedxstqpooovzt ; /usr/bin/python3'
Oct 08 09:42:12 compute-0 sudo[70716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:12 compute-0 sudo[70716]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:13 compute-0 sudo[70766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efqtcztpigahcvrhmldoxgqsqjxzkyws ; /usr/bin/python3'
Oct 08 09:42:13 compute-0 sudo[70766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:13 compute-0 sudo[70766]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:13 compute-0 sudo[70792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkycrxsxfzvsokbpdrqkujsfaozyhjxq ; /usr/bin/python3'
Oct 08 09:42:13 compute-0 sudo[70792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:13 compute-0 sudo[70792]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:14 compute-0 sudo[70818]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-metdkbvnrhbvbknymhzfhquavbnhrsgm ; /usr/bin/python3'
Oct 08 09:42:14 compute-0 sudo[70818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:14 compute-0 sudo[70818]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:14 compute-0 sudo[70844]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgzubenqrjsayogrfavfznofwccpurmc ; /usr/bin/python3'
Oct 08 09:42:14 compute-0 sudo[70844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:14 compute-0 sudo[70844]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:15 compute-0 sudo[70922]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riuztylzzduafgwdvbnmpdbuttaiyxad ; /usr/bin/python3'
Oct 08 09:42:15 compute-0 sudo[70922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:15 compute-0 sudo[70922]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:15 compute-0 sudo[70995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxwzoyggrwbzpnifatjmxibtaibvhdaw ; /usr/bin/python3'
Oct 08 09:42:15 compute-0 sudo[70995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:15 compute-0 sudo[70995]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:16 compute-0 sudo[71097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czuaaxcpthgvntxnebildwjvaangmvbp ; /usr/bin/python3'
Oct 08 09:42:16 compute-0 sudo[71097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:16 compute-0 sudo[71097]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:16 compute-0 sudo[71170]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viiwctsufvqxlyzpujmulzqrmprhsxws ; /usr/bin/python3'
Oct 08 09:42:16 compute-0 sudo[71170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:16 compute-0 sudo[71170]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:17 compute-0 sudo[71220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhbnmfibqpbkyauywzxnklddxecnknkl ; /usr/bin/python3'
Oct 08 09:42:17 compute-0 sudo[71220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:17 compute-0 python3[71222]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:42:18 compute-0 sudo[71220]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:19 compute-0 sudo[71315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbmyzlzkzllhpqrohrklqwgtzdchfutc ; /usr/bin/python3'
Oct 08 09:42:19 compute-0 sudo[71315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:19 compute-0 python3[71317]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 08 09:42:20 compute-0 sudo[71315]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:20 compute-0 sudo[71342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezyfyfrmmdiiectupmchwysgandtjibc ; /usr/bin/python3'
Oct 08 09:42:20 compute-0 sudo[71342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:20 compute-0 python3[71344]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:42:20 compute-0 sudo[71342]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:21 compute-0 sudo[71368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocootkdaypfyxjtwqynvglitpmgprzej ; /usr/bin/python3'
Oct 08 09:42:21 compute-0 sudo[71368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:21 compute-0 python3[71370]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:42:21 compute-0 kernel: loop: module loaded
Oct 08 09:42:21 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Oct 08 09:42:21 compute-0 sudo[71368]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:21 compute-0 sudo[71403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnqsneeqdnynfotyuocupilmdqgszgcg ; /usr/bin/python3'
Oct 08 09:42:21 compute-0 sudo[71403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:21 compute-0 python3[71405]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:42:21 compute-0 lvm[71408]: PV /dev/loop3 not used.
Oct 08 09:42:21 compute-0 lvm[71417]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:42:21 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 08 09:42:21 compute-0 sudo[71403]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:21 compute-0 lvm[71419]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 08 09:42:21 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
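The Ceph OSD on this node is backed by a loop device over a sparse file rather than a real disk; the two logged command batches amount to:
    # 20 GiB sparse image (seek=20G with count=0 writes nothing, only sets the size)
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    # One PV/VG/LV consuming the whole loop device for the OSD
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0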
Oct 08 09:42:22 compute-0 sudo[71495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iveogosvziqvtzevnvmldpmhklthklap ; /usr/bin/python3'
Oct 08 09:42:22 compute-0 sudo[71495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:22 compute-0 python3[71497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:42:22 compute-0 sudo[71495]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:22 compute-0 sudo[71568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugexmsktvqetcolgmullvatpkhjyimwu ; /usr/bin/python3'
Oct 08 09:42:22 compute-0 sudo[71568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:22 compute-0 python3[71570]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916542.1615882-33332-68038896034014/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:42:22 compute-0 sudo[71568]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:23 compute-0 sudo[71618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytzdiwzqnyahjwxetxkqcqiajabltzox ; /usr/bin/python3'
Oct 08 09:42:23 compute-0 sudo[71618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:23 compute-0 python3[71620]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:42:23 compute-0 systemd[1]: Reloading.
Oct 08 09:42:23 compute-0 systemd-rc-local-generator[71643]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:42:23 compute-0 systemd-sysv-generator[71650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:42:24 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 08 09:42:24 compute-0 bash[71661]: /dev/loop3: [64513]:4349020 (/var/lib/ceph-osd-0.img)
Oct 08 09:42:24 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 08 09:42:24 compute-0 lvm[71662]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:42:24 compute-0 lvm[71662]: VG ceph_vg0 finished
Oct 08 09:42:24 compute-0 sudo[71618]: pam_unix(sudo:session): session closed for user root
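Because loop devices do not survive a reboot, a small oneshot unit re-attaches the image at boot. The rendered template content is not logged, but the losetup status line printed by bash[71661] above suggests the service simply reports the device and attaches it if missing. A hypothetical sketch of the command it runs (assumption, not the actual rendered unit):
    # hypothetical: show /dev/loop3 if already attached, otherwise attach the backing file
    losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img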
Oct 08 09:42:26 compute-0 python3[71686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:42:29 compute-0 sudo[71777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmqcpueuarqqvywjqoymhcnxpjzqjygu ; /usr/bin/python3'
Oct 08 09:42:29 compute-0 sudo[71777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:29 compute-0 python3[71779]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 08 09:42:31 compute-0 sudo[71777]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:31 compute-0 sudo[71834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pximebacifmjssibhhsbsjtdyjjfmrop ; /usr/bin/python3'
Oct 08 09:42:31 compute-0 sudo[71834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:31 compute-0 python3[71836]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 08 09:42:34 compute-0 groupadd[71846]: group added to /etc/group: name=cephadm, GID=992
Oct 08 09:42:34 compute-0 groupadd[71846]: group added to /etc/gshadow: name=cephadm
Oct 08 09:42:34 compute-0 groupadd[71846]: new group: name=cephadm, GID=992
Oct 08 09:42:34 compute-0 useradd[71853]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Oct 08 09:42:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 09:42:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 09:42:35 compute-0 sudo[71834]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 09:42:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 09:42:35 compute-0 systemd[1]: run-r314bd02ad92941de879a0d133d4ada9f.service: Deactivated successfully.
Oct 08 09:42:35 compute-0 sudo[71953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avdxctivsehinuvfzgdvoubtphqlfwvp ; /usr/bin/python3'
Oct 08 09:42:35 compute-0 sudo[71953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:35 compute-0 python3[71955]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:42:35 compute-0 sudo[71953]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:35 compute-0 sudo[71981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkeqtceeqeorirnwitlmblzplydruhro ; /usr/bin/python3'
Oct 08 09:42:35 compute-0 sudo[71981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:36 compute-0 python3[71983]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:42:36 compute-0 sudo[71981]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:36 compute-0 sudo[72045]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wavacnsxqszkbixzlcnzcwkyunbdocpy ; /usr/bin/python3'
Oct 08 09:42:36 compute-0 sudo[72045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:37 compute-0 python3[72047]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:42:37 compute-0 sudo[72045]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:37 compute-0 sudo[72071]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwgnohsqgowtbrosclnbyqsqyhmjwnrr ; /usr/bin/python3'
Oct 08 09:42:37 compute-0 sudo[72071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:42:37 compute-0 python3[72073]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:42:37 compute-0 sudo[72071]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:37 compute-0 sudo[72149]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diprafjikbtysihofmivnhyxfcnghggn ; /usr/bin/python3'
Oct 08 09:42:37 compute-0 sudo[72149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:38 compute-0 python3[72151]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:42:38 compute-0 sudo[72149]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:38 compute-0 sudo[72222]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzjwwwkchnnxhohewqgqvprbsvlcrczz ; /usr/bin/python3'
Oct 08 09:42:38 compute-0 sudo[72222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:38 compute-0 python3[72224]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916557.8214695-33524-277441041685286/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:42:38 compute-0 sudo[72222]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:39 compute-0 sudo[72324]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrrglxmhzfeoctwnlyjqrtemxwzqesae ; /usr/bin/python3'
Oct 08 09:42:39 compute-0 sudo[72324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:39 compute-0 python3[72326]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:42:39 compute-0 sudo[72324]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:39 compute-0 sudo[72397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfstrmkwarjdqcoruxnhpmlkaozkpvfm ; /usr/bin/python3'
Oct 08 09:42:39 compute-0 sudo[72397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:39 compute-0 python3[72399]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916559.0681355-33542-19457543483014/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:42:39 compute-0 sudo[72397]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:39 compute-0 sudo[72447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uebdibnhskotbymohnvrqmlunblkqfvl ; /usr/bin/python3'
Oct 08 09:42:39 compute-0 sudo[72447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:40 compute-0 python3[72449]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:42:40 compute-0 sudo[72447]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:40 compute-0 sudo[72475]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwiqwdoitcmvkyiqdpypceswxhlvyild ; /usr/bin/python3'
Oct 08 09:42:40 compute-0 sudo[72475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:40 compute-0 python3[72477]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:42:40 compute-0 sudo[72475]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:40 compute-0 sudo[72503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjplwfxswcyirlzvejqqdtsclgbmtpvo ; /usr/bin/python3'
Oct 08 09:42:40 compute-0 sudo[72503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:40 compute-0 python3[72505]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:42:40 compute-0 sudo[72503]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:40 compute-0 sudo[72531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgnizyhmsbjrxkavepkglsusrkymoval ; /usr/bin/python3'
Oct 08 09:42:40 compute-0 sudo[72531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:42:41 compute-0 python3[72533]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
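cephadm bootstrap stands up the first mon/mgr on compute-0. Restating the logged flags with brief annotations:
    # --skip-firewalld        : firewall is already managed by the EDPM nftables rules above
    # --ssh-user ceph-admin   : cephadm orchestrates over SSH as the ceph-admin account created earlier
    # --fsid / --config       : cluster id fixed up front; assimilate_ceph.conf carries the initial overrides
    # --mon-ip                : first monitor bound to this node's ctlplane address
    /usr/sbin/cephadm bootstrap --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf \
        --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100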
Oct 08 09:42:41 compute-0 sshd-session[72537]: Accepted publickey for ceph-admin from 192.168.122.100 port 41714 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:42:41 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 08 09:42:41 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 08 09:42:41 compute-0 systemd-logind[798]: New session 20 of user ceph-admin.
Oct 08 09:42:41 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 08 09:42:41 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 08 09:42:41 compute-0 systemd[72541]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:42:41 compute-0 systemd[72541]: Queued start job for default target Main User Target.
Oct 08 09:42:41 compute-0 systemd[72541]: Created slice User Application Slice.
Oct 08 09:42:41 compute-0 systemd[72541]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 08 09:42:41 compute-0 systemd[72541]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 09:42:41 compute-0 systemd[72541]: Reached target Paths.
Oct 08 09:42:41 compute-0 systemd[72541]: Reached target Timers.
Oct 08 09:42:41 compute-0 systemd[72541]: Starting D-Bus User Message Bus Socket...
Oct 08 09:42:41 compute-0 systemd[72541]: Starting Create User's Volatile Files and Directories...
Oct 08 09:42:41 compute-0 systemd[72541]: Listening on D-Bus User Message Bus Socket.
Oct 08 09:42:41 compute-0 systemd[72541]: Reached target Sockets.
Oct 08 09:42:41 compute-0 systemd[72541]: Finished Create User's Volatile Files and Directories.
Oct 08 09:42:41 compute-0 systemd[72541]: Reached target Basic System.
Oct 08 09:42:41 compute-0 systemd[72541]: Reached target Main User Target.
Oct 08 09:42:41 compute-0 systemd[72541]: Startup finished in 112ms.
Oct 08 09:42:41 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 08 09:42:41 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Oct 08 09:42:41 compute-0 sshd-session[72537]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:42:41 compute-0 sudo[72556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 08 09:42:41 compute-0 sudo[72556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:42:41 compute-0 sudo[72556]: pam_unix(sudo:session): session closed for user root
Oct 08 09:42:41 compute-0 sshd-session[72555]: Received disconnect from 192.168.122.100 port 41714:11: disconnected by user
Oct 08 09:42:41 compute-0 sshd-session[72555]: Disconnected from user ceph-admin 192.168.122.100 port 41714
Oct 08 09:42:41 compute-0 sshd-session[72537]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:42:41 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Oct 08 09:42:41 compute-0 systemd-logind[798]: Session 20 logged out. Waiting for processes to exit.
Oct 08 09:42:41 compute-0 systemd-logind[798]: Removed session 20.
Oct 08 09:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat381809274-lower\x2dmapped.mount: Deactivated successfully.
Oct 08 09:42:51 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 08 09:42:51 compute-0 systemd[72541]: Activating special unit Exit the Session...
Oct 08 09:42:51 compute-0 systemd[72541]: Stopped target Main User Target.
Oct 08 09:42:51 compute-0 systemd[72541]: Stopped target Basic System.
Oct 08 09:42:51 compute-0 systemd[72541]: Stopped target Paths.
Oct 08 09:42:51 compute-0 systemd[72541]: Stopped target Sockets.
Oct 08 09:42:51 compute-0 systemd[72541]: Stopped target Timers.
Oct 08 09:42:51 compute-0 systemd[72541]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 08 09:42:51 compute-0 systemd[72541]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 08 09:42:51 compute-0 systemd[72541]: Closed D-Bus User Message Bus Socket.
Oct 08 09:42:51 compute-0 systemd[72541]: Stopped Create User's Volatile Files and Directories.
Oct 08 09:42:51 compute-0 systemd[72541]: Removed slice User Application Slice.
Oct 08 09:42:51 compute-0 systemd[72541]: Reached target Shutdown.
Oct 08 09:42:51 compute-0 systemd[72541]: Finished Exit the Session.
Oct 08 09:42:51 compute-0 systemd[72541]: Reached target Exit the Session.
Oct 08 09:42:51 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 08 09:42:51 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 08 09:42:51 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 08 09:42:51 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 08 09:42:51 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 08 09:42:51 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 08 09:42:51 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 08 09:42:58 compute-0 podman[72633]: 2025-10-08 09:42:58.126710287 +0000 UTC m=+16.272952782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:42:58 compute-0 podman[72693]: 2025-10-08 09:42:58.186869667 +0000 UTC m=+0.039256076 container create b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2342328274-merged.mount: Deactivated successfully.
Oct 08 09:42:58 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 08 09:42:58 compute-0 systemd[1]: Started libpod-conmon-b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e.scope.
Oct 08 09:42:58 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:42:58 compute-0 podman[72693]: 2025-10-08 09:42:58.166287486 +0000 UTC m=+0.018673945 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:42:58 compute-0 podman[72693]: 2025-10-08 09:42:58.270608393 +0000 UTC m=+0.122994822 container init b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:42:58 compute-0 podman[72693]: 2025-10-08 09:42:58.276875208 +0000 UTC m=+0.129261627 container start b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 08 09:42:58 compute-0 podman[72693]: 2025-10-08 09:42:58.280265648 +0000 UTC m=+0.132652057 container attach b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:42:58 compute-0 gallant_galois[72709]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct 08 09:42:58 compute-0 systemd[1]: libpod-b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e.scope: Deactivated successfully.
Oct 08 09:42:58 compute-0 podman[72693]: 2025-10-08 09:42:58.375388944 +0000 UTC m=+0.227775353 container died b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:42:58 compute-0 podman[72693]: 2025-10-08 09:42:58.415657698 +0000 UTC m=+0.268044107 container remove b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:42:58 compute-0 systemd[1]: libpod-conmon-b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e.scope: Deactivated successfully.
Oct 08 09:42:58 compute-0 podman[72726]: 2025-10-08 09:42:58.475611735 +0000 UTC m=+0.039678739 container create 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:42:58 compute-0 systemd[1]: Started libpod-conmon-7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6.scope.
Oct 08 09:42:58 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:42:58 compute-0 podman[72726]: 2025-10-08 09:42:58.542858957 +0000 UTC m=+0.106925951 container init 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:42:58 compute-0 podman[72726]: 2025-10-08 09:42:58.55122489 +0000 UTC m=+0.115291924 container start 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:42:58 compute-0 vigorous_keldysh[72743]: 167 167
Oct 08 09:42:58 compute-0 podman[72726]: 2025-10-08 09:42:58.45785467 +0000 UTC m=+0.021921694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:42:58 compute-0 systemd[1]: libpod-7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6.scope: Deactivated successfully.
Oct 08 09:42:58 compute-0 podman[72726]: 2025-10-08 09:42:58.555058924 +0000 UTC m=+0.119125968 container attach 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:42:58 compute-0 podman[72726]: 2025-10-08 09:42:58.55572118 +0000 UTC m=+0.119788214 container died 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 08 09:42:58 compute-0 podman[72726]: 2025-10-08 09:42:58.592937087 +0000 UTC m=+0.157004081 container remove 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:42:58 compute-0 systemd[1]: libpod-conmon-7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6.scope: Deactivated successfully.
Oct 08 09:42:58 compute-0 podman[72760]: 2025-10-08 09:42:58.643717634 +0000 UTC m=+0.032716869 container create f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 09:42:58 compute-0 systemd[1]: Started libpod-conmon-f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e.scope.
Oct 08 09:42:58 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:42:58 compute-0 podman[72760]: 2025-10-08 09:42:58.698278084 +0000 UTC m=+0.087277349 container init f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:42:58 compute-0 podman[72760]: 2025-10-08 09:42:58.704091665 +0000 UTC m=+0.093090890 container start f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:42:58 compute-0 podman[72760]: 2025-10-08 09:42:58.70689035 +0000 UTC m=+0.095889585 container attach f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:42:58 compute-0 hopeful_morse[72776]: AQAiMuZoKm3wKhAArvS1ox2lkrw7anYpGWXX/g==
Oct 08 09:42:58 compute-0 systemd[1]: libpod-f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e.scope: Deactivated successfully.
Oct 08 09:42:58 compute-0 podman[72760]: 2025-10-08 09:42:58.722958251 +0000 UTC m=+0.111957486 container died f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 08 09:42:58 compute-0 podman[72760]: 2025-10-08 09:42:58.629939852 +0000 UTC m=+0.018939107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:42:58 compute-0 podman[72760]: 2025-10-08 09:42:58.756638177 +0000 UTC m=+0.145637412 container remove f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:42:58 compute-0 systemd[1]: libpod-conmon-f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e.scope: Deactivated successfully.
Oct 08 09:42:58 compute-0 podman[72794]: 2025-10-08 09:42:58.818492061 +0000 UTC m=+0.039467718 container create aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:42:58 compute-0 systemd[1]: Started libpod-conmon-aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8.scope.
Oct 08 09:42:58 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:42:58 compute-0 podman[72794]: 2025-10-08 09:42:58.877432309 +0000 UTC m=+0.098407986 container init aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:42:58 compute-0 podman[72794]: 2025-10-08 09:42:58.882018889 +0000 UTC m=+0.102994546 container start aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:42:58 compute-0 podman[72794]: 2025-10-08 09:42:58.885300378 +0000 UTC m=+0.106276035 container attach aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 08 09:42:58 compute-0 podman[72794]: 2025-10-08 09:42:58.8036806 +0000 UTC m=+0.024656257 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:42:58 compute-0 ecstatic_nightingale[72810]: AQAiMuZoCGjGNhAAl17StKeL5XF07Jmf5tnDBw==
Oct 08 09:42:58 compute-0 systemd[1]: libpod-aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8.scope: Deactivated successfully.
Oct 08 09:42:58 compute-0 podman[72794]: 2025-10-08 09:42:58.924990847 +0000 UTC m=+0.145966504 container died aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:42:58 compute-0 podman[72794]: 2025-10-08 09:42:58.961414288 +0000 UTC m=+0.182389945 container remove aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 09:42:58 compute-0 systemd[1]: libpod-conmon-aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8.scope: Deactivated successfully.
Oct 08 09:42:59 compute-0 podman[72830]: 2025-10-08 09:42:59.015428943 +0000 UTC m=+0.034924649 container create 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 09:42:59 compute-0 systemd[1]: Started libpod-conmon-4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a.scope.
Oct 08 09:42:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:42:59 compute-0 podman[72830]: 2025-10-08 09:42:59.070503517 +0000 UTC m=+0.089999263 container init 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 09:42:59 compute-0 podman[72830]: 2025-10-08 09:42:59.074759764 +0000 UTC m=+0.094255470 container start 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 09:42:59 compute-0 podman[72830]: 2025-10-08 09:42:59.077805861 +0000 UTC m=+0.097301597 container attach 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 08 09:42:59 compute-0 beautiful_hoover[72847]: AQAjMuZoqAN/BRAAiOuukMBorzTKYoEIuS0Nfw==
Oct 08 09:42:59 compute-0 systemd[1]: libpod-4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a.scope: Deactivated successfully.
Oct 08 09:42:59 compute-0 podman[72830]: 2025-10-08 09:42:59.095362656 +0000 UTC m=+0.114858402 container died 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 09:42:59 compute-0 podman[72830]: 2025-10-08 09:42:58.999491342 +0000 UTC m=+0.018987058 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:42:59 compute-0 podman[72830]: 2025-10-08 09:42:59.128229864 +0000 UTC m=+0.147725560 container remove 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 08 09:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-71b422a3cd9070af5308dfa82e40ea0d30f2b4b22ef920027485dc0405fea92a-merged.mount: Deactivated successfully.
Oct 08 09:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:42:59 compute-0 systemd[1]: libpod-conmon-4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a.scope: Deactivated successfully.
Oct 08 09:42:59 compute-0 podman[72866]: 2025-10-08 09:42:59.203017173 +0000 UTC m=+0.046128177 container create 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:42:59 compute-0 systemd[1]: Started libpod-conmon-6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e.scope.
Oct 08 09:42:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e056b4a140eab56a5cf3055d1419092da7ed0474d222cce18723625da2099328/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 08 09:42:59 compute-0 podman[72866]: 2025-10-08 09:42:59.26871585 +0000 UTC m=+0.111826864 container init 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:42:59 compute-0 podman[72866]: 2025-10-08 09:42:59.274500731 +0000 UTC m=+0.117611745 container start 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 09:42:59 compute-0 podman[72866]: 2025-10-08 09:42:59.18119718 +0000 UTC m=+0.024308204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:42:59 compute-0 podman[72866]: 2025-10-08 09:42:59.280255631 +0000 UTC m=+0.123366645 container attach 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:42:59 compute-0 strange_banzai[72882]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 08 09:42:59 compute-0 strange_banzai[72882]: setting min_mon_release = quincy
Oct 08 09:42:59 compute-0 strange_banzai[72882]: /usr/bin/monmaptool: set fsid to 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:42:59 compute-0 strange_banzai[72882]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 08 09:42:59 compute-0 systemd[1]: libpod-6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e.scope: Deactivated successfully.
Oct 08 09:42:59 compute-0 podman[72891]: 2025-10-08 09:42:59.349780073 +0000 UTC m=+0.025387535 container died 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 09:42:59 compute-0 podman[72891]: 2025-10-08 09:42:59.390142958 +0000 UTC m=+0.065750390 container remove 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:42:59 compute-0 systemd[1]: libpod-conmon-6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e.scope: Deactivated successfully.
Oct 08 09:42:59 compute-0 podman[72906]: 2025-10-08 09:42:59.479794866 +0000 UTC m=+0.056857911 container create 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:42:59 compute-0 systemd[1]: Started libpod-conmon-8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a.scope.
Oct 08 09:42:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4530cd552b1f440029dcc745ae2021e6a35947aef3397a35656b1151804e35/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4530cd552b1f440029dcc745ae2021e6a35947aef3397a35656b1151804e35/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 08 09:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4530cd552b1f440029dcc745ae2021e6a35947aef3397a35656b1151804e35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4530cd552b1f440029dcc745ae2021e6a35947aef3397a35656b1151804e35/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 08 09:42:59 compute-0 podman[72906]: 2025-10-08 09:42:59.536519754 +0000 UTC m=+0.113582789 container init 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:42:59 compute-0 podman[72906]: 2025-10-08 09:42:59.544103412 +0000 UTC m=+0.121166447 container start 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:42:59 compute-0 podman[72906]: 2025-10-08 09:42:59.546994347 +0000 UTC m=+0.124057382 container attach 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 09:42:59 compute-0 podman[72906]: 2025-10-08 09:42:59.458777421 +0000 UTC m=+0.035840476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:42:59 compute-0 systemd[1]: libpod-8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a.scope: Deactivated successfully.
Oct 08 09:42:59 compute-0 podman[72906]: 2025-10-08 09:42:59.62008814 +0000 UTC m=+0.197151205 container died 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:42:59 compute-0 podman[72906]: 2025-10-08 09:42:59.661634055 +0000 UTC m=+0.238697130 container remove 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:42:59 compute-0 systemd[1]: libpod-conmon-8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a.scope: Deactivated successfully.
Oct 08 09:42:59 compute-0 systemd[1]: Reloading.
Oct 08 09:42:59 compute-0 systemd-rc-local-generator[72991]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:42:59 compute-0 systemd-sysv-generator[72994]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:42:59 compute-0 systemd[1]: Reloading.
Oct 08 09:43:00 compute-0 systemd-rc-local-generator[73024]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:43:00 compute-0 systemd-sysv-generator[73029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:43:00 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct 08 09:43:00 compute-0 systemd[1]: Reloading.
Oct 08 09:43:00 compute-0 systemd-sysv-generator[73068]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:43:00 compute-0 systemd-rc-local-generator[73064]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:43:00 compute-0 systemd[1]: Reached target Ceph cluster 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:43:00 compute-0 systemd[1]: Reloading.
Oct 08 09:43:00 compute-0 systemd-rc-local-generator[73106]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:43:00 compute-0 systemd-sysv-generator[73109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:43:00 compute-0 systemd[1]: Reloading.
Oct 08 09:43:00 compute-0 systemd-rc-local-generator[73139]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:43:00 compute-0 systemd-sysv-generator[73142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:43:00 compute-0 systemd[1]: Created slice Slice /system/ceph-787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:43:00 compute-0 systemd[1]: Reached target System Time Set.
Oct 08 09:43:00 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct 08 09:43:00 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:43:01 compute-0 podman[73199]: 2025-10-08 09:43:01.209619988 +0000 UTC m=+0.047670930 container create 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 podman[73199]: 2025-10-08 09:43:01.274847921 +0000 UTC m=+0.112898823 container init 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 09:43:01 compute-0 podman[73199]: 2025-10-08 09:43:01.185170163 +0000 UTC m=+0.023221145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:01 compute-0 podman[73199]: 2025-10-08 09:43:01.283691779 +0000 UTC m=+0.121742681 container start 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Oct 08 09:43:01 compute-0 bash[73199]: 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a
Oct 08 09:43:01 compute-0 systemd[1]: Started Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:43:01 compute-0 ceph-mon[73218]: set uid:gid to 167:167 (ceph:ceph)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 08 09:43:01 compute-0 ceph-mon[73218]: pidfile_write: ignore empty --pid-file
Oct 08 09:43:01 compute-0 ceph-mon[73218]: load: jerasure load: lrc 
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: RocksDB version: 7.9.2
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Git sha 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: DB SUMMARY
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: DB Session ID:  I5X2GQVJKNE8052F5XL5
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: CURRENT file:  CURRENT
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: IDENTITY file:  IDENTITY
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                         Options.error_if_exists: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                       Options.create_if_missing: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                         Options.paranoid_checks: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                                     Options.env: 0x56400a51ec20
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                                Options.info_log: 0x56400c1d2d60
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.max_file_opening_threads: 16
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                              Options.statistics: (nil)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                               Options.use_fsync: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                       Options.max_log_file_size: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                         Options.allow_fallocate: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                        Options.use_direct_reads: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:          Options.create_missing_column_families: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                              Options.db_log_dir: 
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                                 Options.wal_dir: 
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                   Options.advise_random_on_open: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                    Options.write_buffer_manager: 0x56400c1d7900
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                            Options.rate_limiter: (nil)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.unordered_write: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                               Options.row_cache: None
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                              Options.wal_filter: None
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.allow_ingest_behind: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.two_write_queues: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.manual_wal_flush: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.wal_compression: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.atomic_flush: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                 Options.log_readahead_size: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.allow_data_in_errors: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.db_host_id: __hostname__
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.max_background_jobs: 2
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.max_background_compactions: -1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.max_subcompactions: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.max_total_wal_size: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                          Options.max_open_files: -1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                          Options.bytes_per_sync: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:       Options.compaction_readahead_size: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.max_background_flushes: -1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Compression algorithms supported:
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         kZSTD supported: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         kXpressCompression supported: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         kBZip2Compression supported: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         kLZ4Compression supported: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         kZlibCompression supported: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         kLZ4HCCompression supported: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         kSnappyCompression supported: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:           Options.merge_operator: 
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:        Options.compaction_filter: None
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56400c1d2500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56400c1f7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:        Options.write_buffer_size: 33554432
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:  Options.max_write_buffer_number: 2
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:          Options.compression: NoCompression
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.num_levels: 7
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5fe81d9b-468a-4413-adf1-4e4bd83383d4
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916581342080, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916581343992, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "I5X2GQVJKNE8052F5XL5", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916581344104, "job": 1, "event": "recovery_finished"}
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56400c1f8e00
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: DB pointer 0x56400c302000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 09:43:01 compute-0 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56400c1f7350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 08 09:43:01 compute-0 ceph-mon[73218]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@-1(???) e0 preinit fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 08 09:43:01 compute-0 ceph-mon[73218]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 08 09:43:01 compute-0 podman[73219]: 2025-10-08 09:43:01.367336364 +0000 UTC m=+0.045520310 container create fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 08 09:43:01 compute-0 ceph-mon[73218]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : last_changed 2025-10-08T09:42:59.307631+0000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : created 2025-10-08T09:42:59.307631+0000
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,os=Linux}
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).mds e1 new map
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-10-08T09:43:01.374245+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : fsmap 
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mkfs 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 08 09:43:01 compute-0 systemd[1]: Started libpod-conmon-fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98.scope.
Oct 08 09:43:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c487c7a8e36e6c42bf640cc52a3fa0f29dd300a992c105216206a7ad48d04f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c487c7a8e36e6c42bf640cc52a3fa0f29dd300a992c105216206a7ad48d04f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c487c7a8e36e6c42bf640cc52a3fa0f29dd300a992c105216206a7ad48d04f0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 podman[73219]: 2025-10-08 09:43:01.350835799 +0000 UTC m=+0.029019775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:01 compute-0 podman[73219]: 2025-10-08 09:43:01.446577162 +0000 UTC m=+0.124761138 container init fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 09:43:01 compute-0 podman[73219]: 2025-10-08 09:43:01.453506233 +0000 UTC m=+0.131690209 container start fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 08 09:43:01 compute-0 podman[73219]: 2025-10-08 09:43:01.457287596 +0000 UTC m=+0.135471572 container attach fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct 08 09:43:01 compute-0 ceph-mon[73218]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330539534' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:   cluster:
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:     id:     787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:     health: HEALTH_OK
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:  
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:   services:
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:     mon: 1 daemons, quorum compute-0 (age 0.251755s)
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:     mgr: no daemons active
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:     osd: 0 osds: 0 up, 0 in
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:  
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:   data:
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:     pools:   0 pools, 0 pgs
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:     objects: 0 objects, 0 B
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:     usage:   0 B used, 0 B / 0 B avail
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:     pgs:     
Oct 08 09:43:01 compute-0 affectionate_thompson[73274]:  
Oct 08 09:43:01 compute-0 systemd[1]: libpod-fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98.scope: Deactivated successfully.
Oct 08 09:43:01 compute-0 podman[73219]: 2025-10-08 09:43:01.641515187 +0000 UTC m=+0.319699133 container died fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:01 compute-0 podman[73219]: 2025-10-08 09:43:01.675631946 +0000 UTC m=+0.353815892 container remove fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:01 compute-0 systemd[1]: libpod-conmon-fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98.scope: Deactivated successfully.
Oct 08 09:43:01 compute-0 podman[73313]: 2025-10-08 09:43:01.731120915 +0000 UTC m=+0.036453652 container create d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:01 compute-0 systemd[1]: Started libpod-conmon-d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279.scope.
Oct 08 09:43:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b546311652d4dd558dc6afee95185321c437efbcfb69cd6287ca933f7e49bf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b546311652d4dd558dc6afee95185321c437efbcfb69cd6287ca933f7e49bf4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b546311652d4dd558dc6afee95185321c437efbcfb69cd6287ca933f7e49bf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b546311652d4dd558dc6afee95185321c437efbcfb69cd6287ca933f7e49bf4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:01 compute-0 podman[73313]: 2025-10-08 09:43:01.715972831 +0000 UTC m=+0.021305578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:01 compute-0 podman[73313]: 2025-10-08 09:43:01.816356754 +0000 UTC m=+0.121689581 container init d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:01 compute-0 podman[73313]: 2025-10-08 09:43:01.828978475 +0000 UTC m=+0.134311202 container start d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:43:01 compute-0 podman[73313]: 2025-10-08 09:43:01.833547015 +0000 UTC m=+0.138879742 container attach d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 09:43:02 compute-0 ceph-mon[73218]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 08 09:43:02 compute-0 ceph-mon[73218]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4199698163' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 08 09:43:02 compute-0 ceph-mon[73218]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4199698163' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 08 09:43:02 compute-0 romantic_dubinsky[73329]: 
Oct 08 09:43:02 compute-0 romantic_dubinsky[73329]: [global]
Oct 08 09:43:02 compute-0 romantic_dubinsky[73329]:         fsid = 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:02 compute-0 romantic_dubinsky[73329]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 08 09:43:02 compute-0 systemd[1]: libpod-d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279.scope: Deactivated successfully.
Oct 08 09:43:02 compute-0 podman[73313]: 2025-10-08 09:43:02.024849277 +0000 UTC m=+0.330182004 container died d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct 08 09:43:02 compute-0 podman[73313]: 2025-10-08 09:43:02.056770708 +0000 UTC m=+0.362103435 container remove d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:02 compute-0 systemd[1]: libpod-conmon-d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279.scope: Deactivated successfully.
Oct 08 09:43:02 compute-0 podman[73366]: 2025-10-08 09:43:02.143540451 +0000 UTC m=+0.055282637 container create 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 08 09:43:02 compute-0 systemd[1]: Started libpod-conmon-4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb.scope.
Oct 08 09:43:02 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:02 compute-0 podman[73366]: 2025-10-08 09:43:02.211416488 +0000 UTC m=+0.123158674 container init 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:02 compute-0 podman[73366]: 2025-10-08 09:43:02.219115275 +0000 UTC m=+0.130857451 container start 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 08 09:43:02 compute-0 podman[73366]: 2025-10-08 09:43:02.124632515 +0000 UTC m=+0.036374701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:02 compute-0 podman[73366]: 2025-10-08 09:43:02.222292293 +0000 UTC m=+0.134034499 container attach 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:43:02 compute-0 ceph-mon[73218]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:43:02 compute-0 ceph-mon[73218]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472823470' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:02 compute-0 ceph-mon[73218]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 08 09:43:02 compute-0 ceph-mon[73218]: monmap epoch 1
Oct 08 09:43:02 compute-0 ceph-mon[73218]: fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:02 compute-0 ceph-mon[73218]: last_changed 2025-10-08T09:42:59.307631+0000
Oct 08 09:43:02 compute-0 ceph-mon[73218]: created 2025-10-08T09:42:59.307631+0000
Oct 08 09:43:02 compute-0 ceph-mon[73218]: min_mon_release 19 (squid)
Oct 08 09:43:02 compute-0 ceph-mon[73218]: election_strategy: 1
Oct 08 09:43:02 compute-0 ceph-mon[73218]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 08 09:43:02 compute-0 ceph-mon[73218]: fsmap 
Oct 08 09:43:02 compute-0 ceph-mon[73218]: osdmap e1: 0 total, 0 up, 0 in
Oct 08 09:43:02 compute-0 ceph-mon[73218]: mgrmap e1: no daemons active
Oct 08 09:43:02 compute-0 ceph-mon[73218]: from='client.? 192.168.122.100:0/3330539534' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 09:43:02 compute-0 ceph-mon[73218]: from='client.? 192.168.122.100:0/4199698163' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 08 09:43:02 compute-0 ceph-mon[73218]: from='client.? 192.168.122.100:0/4199698163' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 08 09:43:02 compute-0 ceph-mon[73218]: from='client.? 192.168.122.100:0/3472823470' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
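[editor's note] This burst of audit lines is the bootstrap client driving the monitor: "status", "config assimilate-conf" (push the bootstrap config into the mon store) and "config generate-minimal-conf" (read back a minimal ceph.conf). The same commands can be issued from Python through the librados binding; a sketch, assuming the usual cephadm paths for conffile and keyring:

    import json
    import rados  # python3-rados, shipped with Ceph

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf=dict(keyring="/etc/ceph/ceph.client.admin.keyring"))
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    if ret == 0:
        print(outbuf.decode())  # a minimal ceph.conf: fsid plus mon_host
    else:
        print("failed:", errs)
    cluster.shutdown()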
Oct 08 09:43:02 compute-0 systemd[1]: libpod-4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb.scope: Deactivated successfully.
Oct 08 09:43:02 compute-0 podman[73408]: 2025-10-08 09:43:02.445369745 +0000 UTC m=+0.022943132 container died 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5-merged.mount: Deactivated successfully.
Oct 08 09:43:02 compute-0 podman[73408]: 2025-10-08 09:43:02.480553075 +0000 UTC m=+0.058126452 container remove 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 09:43:02 compute-0 systemd[1]: libpod-conmon-4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb.scope: Deactivated successfully.
Oct 08 09:43:02 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:43:02 compute-0 ceph-mon[73218]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 08 09:43:02 compute-0 ceph-mon[73218]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 08 09:43:02 compute-0 ceph-mon[73218]: mon.compute-0@0(leader) e1 shutdown
Oct 08 09:43:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0[73214]: 2025-10-08T09:43:02.669+0000 7fd7b6c5e640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 08 09:43:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0[73214]: 2025-10-08T09:43:02.669+0000 7fd7b6c5e640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 08 09:43:02 compute-0 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 08 09:43:02 compute-0 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 08 09:43:02 compute-0 podman[73452]: 2025-10-08 09:43:02.812579594 +0000 UTC m=+0.179241877 container died 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 09:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719-merged.mount: Deactivated successfully.
Oct 08 09:43:02 compute-0 podman[73452]: 2025-10-08 09:43:02.846860326 +0000 UTC m=+0.213522569 container remove 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:02 compute-0 bash[73452]: ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0
Oct 08 09:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 09:43:02 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@mon.compute-0.service: Deactivated successfully.
Oct 08 09:43:02 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
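[editor's note] The stop/start pair here uses cephadm's templated unit name, ceph-<fsid>@<daemon>.service, visible verbatim in the "Deactivated successfully" line above. A small sketch that rebuilds the unit name from the fsid in this log and asks systemd for its state; fsid and daemon name are taken from the log itself:

    import subprocess

    fsid = "787292cc-8154-50c4-9e00-e9be3e817149"   # from this log
    unit = f"ceph-{fsid}@mon.compute-0.service"     # cephadm's ceph-<fsid>@<daemon> template

    # "systemctl is-active" prints a single state token such as "active" or "inactive".
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, "->", state or "unknown")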
Oct 08 09:43:02 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:43:03 compute-0 podman[73555]: 2025-10-08 09:43:03.171574821 +0000 UTC m=+0.033361244 container create 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a1a03dbd7c508b561d3245ac013ec6a83c9d541d6eb822d9e74ba9d9f78e4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a1a03dbd7c508b561d3245ac013ec6a83c9d541d6eb822d9e74ba9d9f78e4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a1a03dbd7c508b561d3245ac013ec6a83c9d541d6eb822d9e74ba9d9f78e4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a1a03dbd7c508b561d3245ac013ec6a83c9d541d6eb822d9e74ba9d9f78e4c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
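[editor's note] The four xfs notices are the kernel flagging bind-mounted paths whose on-disk timestamps are 32-bit; 0x7fffffff seconds past the Unix epoch is the y2038 limit, which one line of Python confirms:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 is the largest signed 32-bit time_t,
    # the limit the xfs messages above refer to.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00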
Oct 08 09:43:03 compute-0 podman[73555]: 2025-10-08 09:43:03.232432966 +0000 UTC m=+0.094219469 container init 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 08 09:43:03 compute-0 podman[73555]: 2025-10-08 09:43:03.237600881 +0000 UTC m=+0.099387344 container start 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 09:43:03 compute-0 bash[73555]: 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d
Oct 08 09:43:03 compute-0 podman[73555]: 2025-10-08 09:43:03.157924661 +0000 UTC m=+0.019711114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:03 compute-0 systemd[1]: Started Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:43:03 compute-0 ceph-mon[73572]: set uid:gid to 167:167 (ceph:ceph)
Oct 08 09:43:03 compute-0 ceph-mon[73572]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 08 09:43:03 compute-0 ceph-mon[73572]: pidfile_write: ignore empty --pid-file
Oct 08 09:43:03 compute-0 ceph-mon[73572]: load: jerasure load: lrc 
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: RocksDB version: 7.9.2
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Git sha 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: DB SUMMARY
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: DB Session ID:  KN4HYS7VUCE6V85JIQOU
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: CURRENT file:  CURRENT
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: IDENTITY file:  IDENTITY
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 59859 ; 
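[editor's note] The DB SUMMARY above enumerates the monitor's RocksDB directory: CURRENT and IDENTITY markers, a 179-byte MANIFEST, a single SST file and a ~60 KB write-ahead log that is replayed further down. A sketch that groups the same directory by file role, using the store path printed in the log (it has to run on the host where the mon data lives):

    import os
    from collections import defaultdict

    STORE = "/var/lib/ceph/mon/ceph-compute-0/store.db"  # path from the DB SUMMARY

    groups = defaultdict(list)
    for name in sorted(os.listdir(STORE)):
        if name.endswith(".sst"):
            groups["sst"].append(name)          # sorted table files
        elif name.endswith(".log"):
            groups["wal"].append(name)          # write-ahead logs
        elif name.startswith("MANIFEST"):
            groups["manifest"].append(name)     # version-edit journal
        else:
            groups["meta"].append(name)         # CURRENT, IDENTITY, OPTIONS, ...

    for role, files in groups.items():
        print(f"{role}: {', '.join(files)}")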
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                         Options.error_if_exists: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                       Options.create_if_missing: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                         Options.paranoid_checks: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                                     Options.env: 0x55f7a0c9cc20
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                                Options.info_log: 0x55f7a1cbfac0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.max_file_opening_threads: 16
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                              Options.statistics: (nil)
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                               Options.use_fsync: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                       Options.max_log_file_size: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                         Options.allow_fallocate: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                        Options.use_direct_reads: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:          Options.create_missing_column_families: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                              Options.db_log_dir: 
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                                 Options.wal_dir: 
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                   Options.advise_random_on_open: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                    Options.write_buffer_manager: 0x55f7a1cc3900
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                            Options.rate_limiter: (nil)
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.unordered_write: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                               Options.row_cache: None
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                              Options.wal_filter: None
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.allow_ingest_behind: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.two_write_queues: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.manual_wal_flush: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.wal_compression: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.atomic_flush: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                 Options.log_readahead_size: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.allow_data_in_errors: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.db_host_id: __hostname__
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.max_background_jobs: 2
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.max_background_compactions: -1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.max_subcompactions: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.max_total_wal_size: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                          Options.max_open_files: -1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                          Options.bytes_per_sync: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:       Options.compaction_readahead_size: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.max_background_flushes: -1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Compression algorithms supported:
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         kZSTD supported: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         kXpressCompression supported: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         kBZip2Compression supported: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         kLZ4Compression supported: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         kZlibCompression supported: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         kLZ4HCCompression supported: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         kSnappyCompression supported: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: DMutex implementation: pthread_mutex_t
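[editor's note] Everything from Options.error_if_exists down to the capability lines above is RocksDB's one-time dump of its effective DBOptions, one key per line. When comparing two boots it helps to fold the dump back into a dict; a sketch whose line pattern is an assumption fitted to this journal rendering:

    import re

    OPT_RE = re.compile(r"rocksdb:\s+Options\.(\S+?)\s*:\s*(.*\S)")

    def parse_rocksdb_options(lines):
        """Collect 'Options.key: value' pairs from a rocksdb startup dump."""
        opts = {}
        for line in lines:
            m = OPT_RE.search(line)
            if m:
                opts[m.group(1)] = m.group(2)
        return opts

    # e.g. parse_rocksdb_options(journal_lines)["max_open_files"] == "-1"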
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:           Options.merge_operator: 
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:        Options.compaction_filter: None
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f7a1cbeaa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f7a1ce3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
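[editor's note] In the table_factory dump above, the BinnedLRUCache is sized at 536870912 bytes; the stats dump later in this log reports the same cache as "capacity: 512.00 MB", which is simply the MiB conversion:

    capacity_bytes = 536_870_912          # block_cache_options.capacity above
    print(capacity_bytes / 2**20, "MiB")  # -> 512.0, the "512.00 MB" in the stats dump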
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:        Options.write_buffer_size: 33554432
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:  Options.max_write_buffer_number: 2
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:          Options.compression: NoCompression
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.num_levels: 7
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5fe81d9b-468a-4413-adf1-4e4bd83383d4
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916583279999, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916583283778, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 58095, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3209, "raw_average_key_size": 30, "raw_value_size": 55578, "raw_average_value_size": 529, "num_data_blocks": 9, "num_entries": 105, "num_filter_entries": 105, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916583, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916583283899, "job": 1, "event": "recovery_finished"}
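[editor's note] EVENT_LOG_v1 lines carry a JSON object after a fixed marker, so WAL-recovery events like the recovery_started/table_file_creation/recovery_finished trio above are machine-readable as-is. A minimal sketch:

    import json

    MARKER = "EVENT_LOG_v1 "

    def parse_event_log(line):
        """Return the JSON payload of a rocksdb EVENT_LOG_v1 line, or None."""
        idx = line.find(MARKER)
        if idx == -1:
            return None
        return json.loads(line[idx + len(MARKER):])

    ev = parse_event_log('rocksdb: EVENT_LOG_v1 {"time_micros": 1759916583283899, '
                         '"job": 1, "event": "recovery_finished"}')
    print(ev["event"])  # -> recovery_finished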
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f7a1ce4e00
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: DB pointer 0x55f7a1dee000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 09:43:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.13 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0   60.13 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 4.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 4.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 08 09:43:03 compute-0 ceph-mon[73572]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@-1(???) e1 preinit fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@-1(???).mds e1 new map
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-10-08T09:43:01.374245+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 08 09:43:03 compute-0 ceph-mon[73572]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : last_changed 2025-10-08T09:42:59.307631+0000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : created 2025-10-08T09:42:59.307631+0000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap 
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 08 09:43:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
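[editor's note] The quorum debug block re-prints the monmap; each rank line has the fixed shape "N: [v2:addr/nonce,v1:addr/nonce] mon.NAME", so the messenger-v2 and legacy-v1 endpoints can be split out mechanically. A sketch fitted to that shape:

    import re

    RANK_RE = re.compile(
        r"(?P<rank>\d+): \[v2:(?P<v2>[\d.]+:\d+)/\d+,v1:(?P<v1>[\d.]+:\d+)/\d+\] "
        r"mon\.(?P<name>\S+)")

    m = RANK_RE.search(
        "0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0")
    print(m.group("rank"), m.group("name"), m.group("v2"), m.group("v1"))
    # -> 0 compute-0 192.168.122.100:3300 192.168.122.100:6789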
Oct 08 09:43:03 compute-0 podman[73573]: 2025-10-08 09:43:03.309401013 +0000 UTC m=+0.038346228 container create 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:03 compute-0 systemd[1]: Started libpod-conmon-44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db.scope.
Oct 08 09:43:03 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 08 09:43:03 compute-0 ceph-mon[73572]: monmap epoch 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:03 compute-0 ceph-mon[73572]: last_changed 2025-10-08T09:42:59.307631+0000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: created 2025-10-08T09:42:59.307631+0000
Oct 08 09:43:03 compute-0 ceph-mon[73572]: min_mon_release 19 (squid)
Oct 08 09:43:03 compute-0 ceph-mon[73572]: election_strategy: 1
Oct 08 09:43:03 compute-0 ceph-mon[73572]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 08 09:43:03 compute-0 ceph-mon[73572]: fsmap 
Oct 08 09:43:03 compute-0 ceph-mon[73572]: osdmap e1: 0 total, 0 up, 0 in
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mgrmap e1: no daemons active
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fff82d219aa41ca7f4c00ad79e92a6fa22eae5c0057b8e5ed1c6b9cd39045b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fff82d219aa41ca7f4c00ad79e92a6fa22eae5c0057b8e5ed1c6b9cd39045b8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fff82d219aa41ca7f4c00ad79e92a6fa22eae5c0057b8e5ed1c6b9cd39045b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:03 compute-0 podman[73573]: 2025-10-08 09:43:03.373898501 +0000 UTC m=+0.102843726 container init 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:03 compute-0 podman[73573]: 2025-10-08 09:43:03.380523948 +0000 UTC m=+0.109469153 container start 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:03 compute-0 podman[73573]: 2025-10-08 09:43:03.383312113 +0000 UTC m=+0.112257358 container attach 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:03 compute-0 podman[73573]: 2025-10-08 09:43:03.294581023 +0000 UTC m=+0.023526258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Oct 08 09:43:03 compute-0 systemd[1]: libpod-44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db.scope: Deactivated successfully.
Oct 08 09:43:03 compute-0 podman[73573]: 2025-10-08 09:43:03.597115743 +0000 UTC m=+0.326060958 container died 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fff82d219aa41ca7f4c00ad79e92a6fa22eae5c0057b8e5ed1c6b9cd39045b8-merged.mount: Deactivated successfully.
Oct 08 09:43:03 compute-0 podman[73573]: 2025-10-08 09:43:03.635022036 +0000 UTC m=+0.363967241 container remove 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:03 compute-0 systemd[1]: libpod-conmon-44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db.scope: Deactivated successfully.
Oct 08 09:43:03 compute-0 podman[73667]: 2025-10-08 09:43:03.686721561 +0000 UTC m=+0.035659114 container create a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 09:43:03 compute-0 systemd[1]: Started libpod-conmon-a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d.scope.
Oct 08 09:43:03 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6d9f63418d672a526030b4c257ce498672a7d67c28f674ca85c89e21cd6de9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6d9f63418d672a526030b4c257ce498672a7d67c28f674ca85c89e21cd6de9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6d9f63418d672a526030b4c257ce498672a7d67c28f674ca85c89e21cd6de9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:03 compute-0 podman[73667]: 2025-10-08 09:43:03.762975862 +0000 UTC m=+0.111913395 container init a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 08 09:43:03 compute-0 podman[73667]: 2025-10-08 09:43:03.670521189 +0000 UTC m=+0.019458722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:03 compute-0 podman[73667]: 2025-10-08 09:43:03.774994707 +0000 UTC m=+0.123932220 container start a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:03 compute-0 podman[73667]: 2025-10-08 09:43:03.777977854 +0000 UTC m=+0.126915387 container attach a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Oct 08 09:43:04 compute-0 systemd[1]: libpod-a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d.scope: Deactivated successfully.
Oct 08 09:43:04 compute-0 podman[73667]: 2025-10-08 09:43:04.030645946 +0000 UTC m=+0.379583549 container died a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 08 09:43:04 compute-0 podman[73667]: 2025-10-08 09:43:04.105853067 +0000 UTC m=+0.454790580 container remove a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:04 compute-0 systemd[1]: libpod-conmon-a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d.scope: Deactivated successfully.
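The create → init → start → attach → died → remove sequence above (containers amazing_thompson, modest_dewdney) is cephadm's pattern of running each short ceph CLI call in a throwaway container. A hedged sketch of an equivalent invocation — not cephadm's actual code; the mount paths are inferred from the xfs remount lines:

```python
# Hedged sketch, not cephadm's actual code: one `podman run --rm` call
# produces the same create/init/start/attach/died/remove event sequence
# that journald records above. Mount paths are assumptions inferred
# from the overlay remount lines.
import subprocess

IMAGE = "quay.io/ceph/ceph:v19"

def ceph_cli(*args: str) -> str:
    cmd = [
        "podman", "run", "--rm", "--net=host",
        "-v", "/etc/ceph:/etc/ceph:z",          # ceph.conf + admin keyring
        "-v", "/var/log/ceph:/var/log/ceph:z",
        IMAGE, "ceph", *args,
    ]
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout

print(ceph_cli("status", "--format", "json-pretty"))
```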
Oct 08 09:43:04 compute-0 systemd[1]: Reloading.
Oct 08 09:43:04 compute-0 systemd-rc-local-generator[73740]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:43:04 compute-0 systemd-sysv-generator[73746]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:43:04 compute-0 systemd[1]: Reloading.
Oct 08 09:43:04 compute-0 systemd-rc-local-generator[73788]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:43:04 compute-0 systemd-sysv-generator[73791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:43:04 compute-0 systemd[1]: Starting Ceph mgr.compute-0.ixicfj for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:43:04 compute-0 podman[73849]: 2025-10-08 09:43:04.890973972 +0000 UTC m=+0.019031919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:06 compute-0 podman[73849]: 2025-10-08 09:43:06.236266592 +0000 UTC m=+1.364324519 container create 507427ceb1795d8f880fc9a43897ce65f2b5ce89744d49298a3fa86e2b68fb56 (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831c2acc3a08033f74422f0cfbd7d714be37000ffed30c8cbe85077263a3a8d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831c2acc3a08033f74422f0cfbd7d714be37000ffed30c8cbe85077263a3a8d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831c2acc3a08033f74422f0cfbd7d714be37000ffed30c8cbe85077263a3a8d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831c2acc3a08033f74422f0cfbd7d714be37000ffed30c8cbe85077263a3a8d4/merged/var/lib/ceph/mgr/ceph-compute-0.ixicfj supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:06 compute-0 podman[73849]: 2025-10-08 09:43:06.318346904 +0000 UTC m=+1.446404911 container init 507427ceb1795d8f880fc9a43897ce65f2b5ce89744d49298a3fa86e2b68fb56 (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:06 compute-0 podman[73849]: 2025-10-08 09:43:06.327728177 +0000 UTC m=+1.455786134 container start 507427ceb1795d8f880fc9a43897ce65f2b5ce89744d49298a3fa86e2b68fb56 (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:06 compute-0 bash[73849]: 507427ceb1795d8f880fc9a43897ce65f2b5ce89744d49298a3fa86e2b68fb56
Oct 08 09:43:06 compute-0 systemd[1]: Started Ceph mgr.compute-0.ixicfj for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:43:06 compute-0 ceph-mgr[73869]: set uid:gid to 167:167 (ceph:ceph)
Oct 08 09:43:06 compute-0 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 08 09:43:06 compute-0 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct 08 09:43:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct 08 09:43:06 compute-0 podman[73870]: 2025-10-08 09:43:06.435135111 +0000 UTC m=+0.053432531 container create e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 08 09:43:06 compute-0 systemd[1]: Started libpod-conmon-e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9.scope.
Oct 08 09:43:06 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8e9a6f6d9dc583358b00b66e93d8d17b0c1dff470c869fcff473458b242339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8e9a6f6d9dc583358b00b66e93d8d17b0c1dff470c869fcff473458b242339/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8e9a6f6d9dc583358b00b66e93d8d17b0c1dff470c869fcff473458b242339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:06 compute-0 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:43:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct 08 09:43:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:06.507+0000 7f971cc6d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
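The recurring "Module X has missing NOTIFY_TYPES member" warnings are emitted once per module as ceph-mgr loads it: the loader looks for a NOTIFY_TYPES class attribute listing which cluster-map notifications the module consumes, and a module that omits it still loads but is warned about. A minimal sketch of a module declaring the attribute, based on the mgr_module API bundled in the ceph container (it is importable only inside ceph-mgr's embedded interpreter, not as a standalone script):

```python
# Hedged sketch of a ceph-mgr module declaring NOTIFY_TYPES; runs only
# inside ceph-mgr's embedded Python, not standalone.
from mgr_module import MgrModule, NotifyType

class Module(MgrModule):
    # Listing the notifications we consume is what silences the
    # "has missing NOTIFY_TYPES member" warning seen above.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

    def notify(self, notify_type: NotifyType, notify_id: str) -> None:
        self.log.info("received %s notification", notify_type)
```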
Oct 08 09:43:06 compute-0 podman[73870]: 2025-10-08 09:43:06.418240432 +0000 UTC m=+0.036537862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:06 compute-0 podman[73870]: 2025-10-08 09:43:06.515154645 +0000 UTC m=+0.133452105 container init e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Oct 08 09:43:06 compute-0 podman[73870]: 2025-10-08 09:43:06.521156777 +0000 UTC m=+0.139454207 container start e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:06 compute-0 podman[73870]: 2025-10-08 09:43:06.52487243 +0000 UTC m=+0.143169880 container attach e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 09:43:06 compute-0 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:43:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct 08 09:43:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:06.584+0000 7f971cc6d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:43:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 08 09:43:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4167060928' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]: 
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]: {
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "health": {
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "status": "HEALTH_OK",
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "checks": {},
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "mutes": []
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     },
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "election_epoch": 5,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "quorum": [
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         0
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     ],
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "quorum_names": [
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "compute-0"
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     ],
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "quorum_age": 3,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "monmap": {
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "epoch": 1,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "min_mon_release_name": "squid",
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "num_mons": 1
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     },
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "osdmap": {
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "epoch": 1,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "num_osds": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "num_up_osds": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "osd_up_since": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "num_in_osds": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "osd_in_since": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "num_remapped_pgs": 0
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     },
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "pgmap": {
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "pgs_by_state": [],
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "num_pgs": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "num_pools": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "num_objects": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "data_bytes": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "bytes_used": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "bytes_avail": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "bytes_total": 0
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     },
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "fsmap": {
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "epoch": 1,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "btime": "2025-10-08T09:43:01:374245+0000",
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "by_rank": [],
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "up:standby": 0
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     },
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "mgrmap": {
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "available": false,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "num_standbys": 0,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "modules": [
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:             "iostat",
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:             "nfs",
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:             "restful"
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         ],
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "services": {}
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     },
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "servicemap": {
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "epoch": 1,
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "modified": "2025-10-08T09:43:01.375926+0000",
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:         "services": {}
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     },
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]:     "progress_events": {}
Oct 08 09:43:06 compute-0 jovial_lamarr[73907]: }
Oct 08 09:43:06 compute-0 systemd[1]: libpod-e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9.scope: Deactivated successfully.
Oct 08 09:43:06 compute-0 podman[73870]: 2025-10-08 09:43:06.765492886 +0000 UTC m=+0.383790306 container died e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 09:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd8e9a6f6d9dc583358b00b66e93d8d17b0c1dff470c869fcff473458b242339-merged.mount: Deactivated successfully.
Oct 08 09:43:06 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4167060928' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 09:43:06 compute-0 podman[73870]: 2025-10-08 09:43:06.808216202 +0000 UTC m=+0.426513622 container remove e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:43:06 compute-0 systemd[1]: libpod-conmon-e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9.scope: Deactivated successfully.
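Each throwaway container prints the cluster status as JSON; the fields worth polling at this stage are health.status, quorum_names, and mgrmap.available (still false above, because the mgr is still loading modules). A small sketch that pulls those fields out of the captured blob:

```python
# Hedged sketch: pick out the fields a bootstrap poll cares about from
# the `ceph status --format json-pretty` blob printed above.
# Usage: pipe the JSON in, e.g.  ceph status --format json | python3 check.py
import json
import sys

status = json.loads(sys.stdin.read())
print("healthy:  ", status["health"]["status"] == "HEALTH_OK")
print("in quorum:", "compute-0" in status["quorum_names"])
print("mgr up:   ", status["mgrmap"]["available"])  # false at this point
```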
Oct 08 09:43:07 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct 08 09:43:07 compute-0 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:43:07 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct 08 09:43:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:07.354+0000 7f971cc6d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:43:07 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct 08 09:43:07 compute-0 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:43:07 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct 08 09:43:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:07.969+0000 7f971cc6d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:43:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 08 09:43:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 08 09:43:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   from numpy import show_config as show_numpy_config
Oct 08 09:43:08 compute-0 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:43:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:08.130+0000 7f971cc6d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:43:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct 08 09:43:08 compute-0 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:43:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:08.200+0000 7f971cc6d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:43:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct 08 09:43:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct 08 09:43:08 compute-0 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:43:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct 08 09:43:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:08.333+0000 7f971cc6d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:43:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct 08 09:43:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct 08 09:43:08 compute-0 podman[73956]: 2025-10-08 09:43:08.874086449 +0000 UTC m=+0.044813455 container create 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 09:43:08 compute-0 systemd[1]: Started libpod-conmon-7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a.scope.
Oct 08 09:43:08 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd866ae709c382715a1ce5c8068304f8e130c9fbaf8230a9b5b7a72eacebfbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd866ae709c382715a1ce5c8068304f8e130c9fbaf8230a9b5b7a72eacebfbc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd866ae709c382715a1ce5c8068304f8e130c9fbaf8230a9b5b7a72eacebfbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:08 compute-0 podman[73956]: 2025-10-08 09:43:08.853556438 +0000 UTC m=+0.024283444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:08 compute-0 podman[73956]: 2025-10-08 09:43:08.962537256 +0000 UTC m=+0.133264262 container init 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:08 compute-0 podman[73956]: 2025-10-08 09:43:08.968305617 +0000 UTC m=+0.139032613 container start 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:08 compute-0 podman[73956]: 2025-10-08 09:43:08.972398134 +0000 UTC m=+0.143125130 container attach 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct 08 09:43:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 08 09:43:09 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2977280056' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]: 
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]: {
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "health": {
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "status": "HEALTH_OK",
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "checks": {},
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "mutes": []
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     },
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "election_epoch": 5,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "quorum": [
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         0
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     ],
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "quorum_names": [
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "compute-0"
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     ],
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "quorum_age": 5,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "monmap": {
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "epoch": 1,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "min_mon_release_name": "squid",
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "num_mons": 1
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     },
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "osdmap": {
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "epoch": 1,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "num_osds": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "num_up_osds": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "osd_up_since": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "num_in_osds": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "osd_in_since": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "num_remapped_pgs": 0
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     },
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "pgmap": {
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "pgs_by_state": [],
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "num_pgs": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "num_pools": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "num_objects": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "data_bytes": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "bytes_used": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "bytes_avail": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "bytes_total": 0
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     },
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "fsmap": {
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "epoch": 1,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "btime": "2025-10-08T09:43:01:374245+0000",
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "by_rank": [],
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "up:standby": 0
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     },
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "mgrmap": {
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "available": false,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "num_standbys": 0,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "modules": [
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:             "iostat",
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:             "nfs",
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:             "restful"
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         ],
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "services": {}
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     },
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "servicemap": {
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "epoch": 1,
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "modified": "2025-10-08T09:43:01.375926+0000",
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:         "services": {}
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     },
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]:     "progress_events": {}
Oct 08 09:43:09 compute-0 relaxed_mclean[73972]: }
Oct 08 09:43:09 compute-0 systemd[1]: libpod-7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a.scope: Deactivated successfully.
Oct 08 09:43:09 compute-0 podman[73956]: 2025-10-08 09:43:09.156185279 +0000 UTC m=+0.326912275 container died 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:43:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-edd866ae709c382715a1ce5c8068304f8e130c9fbaf8230a9b5b7a72eacebfbc-merged.mount: Deactivated successfully.
Oct 08 09:43:09 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2977280056' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 09:43:09 compute-0 podman[73956]: 2025-10-08 09:43:09.222872236 +0000 UTC m=+0.393599232 container remove 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Oct 08 09:43:09 compute-0 systemd[1]: libpod-conmon-7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a.scope: Deactivated successfully.
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct 08 09:43:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.329+0000 7f971cc6d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.559+0000 7f971cc6d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct 08 09:43:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.637+0000 7f971cc6d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct 08 09:43:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.704+0000 7f971cc6d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct 08 09:43:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.784+0000 7f971cc6d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:43:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct 08 09:43:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.854+0000 7f971cc6d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:43:10 compute-0 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:43:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct 08 09:43:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:10.210+0000 7f971cc6d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:43:10 compute-0 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:43:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct 08 09:43:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:10.305+0000 7f971cc6d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:43:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct 08 09:43:10 compute-0 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:43:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct 08 09:43:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:10.721+0000 7f971cc6d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct 08 09:43:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.245+0000 7f971cc6d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 podman[74012]: 2025-10-08 09:43:11.304731034 +0000 UTC m=+0.055518819 container create 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct 08 09:43:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.314+0000 7f971cc6d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 systemd[1]: Started libpod-conmon-9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796.scope.
Oct 08 09:43:11 compute-0 podman[74012]: 2025-10-08 09:43:11.275612437 +0000 UTC m=+0.026400292 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:11 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1a9dca2d7b30fdff36568d934a3ccab08f6e50914590d51c843954c99e8d8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1a9dca2d7b30fdff36568d934a3ccab08f6e50914590d51c843954c99e8d8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1a9dca2d7b30fdff36568d934a3ccab08f6e50914590d51c843954c99e8d8f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct 08 09:43:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.392+0000 7f971cc6d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 podman[74012]: 2025-10-08 09:43:11.395136568 +0000 UTC m=+0.145924323 container init 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:11 compute-0 podman[74012]: 2025-10-08 09:43:11.402379782 +0000 UTC m=+0.153167527 container start 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:11 compute-0 podman[74012]: 2025-10-08 09:43:11.405741112 +0000 UTC m=+0.156528847 container attach 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct 08 09:43:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.537+0000 7f971cc6d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct 08 09:43:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.601+0000 7f971cc6d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 08 09:43:11 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884133885' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]: 
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]: {
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "health": {
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "status": "HEALTH_OK",
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "checks": {},
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "mutes": []
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     },
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "election_epoch": 5,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "quorum": [
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         0
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     ],
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "quorum_names": [
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "compute-0"
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     ],
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "quorum_age": 8,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "monmap": {
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "epoch": 1,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "min_mon_release_name": "squid",
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "num_mons": 1
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     },
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "osdmap": {
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "epoch": 1,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "num_osds": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "num_up_osds": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "osd_up_since": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "num_in_osds": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "osd_in_since": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "num_remapped_pgs": 0
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     },
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "pgmap": {
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "pgs_by_state": [],
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "num_pgs": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "num_pools": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "num_objects": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "data_bytes": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "bytes_used": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "bytes_avail": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "bytes_total": 0
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     },
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "fsmap": {
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "epoch": 1,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "btime": "2025-10-08T09:43:01:374245+0000",
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "by_rank": [],
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "up:standby": 0
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     },
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "mgrmap": {
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "available": false,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "num_standbys": 0,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "modules": [
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:             "iostat",
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:             "nfs",
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:             "restful"
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         ],
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "services": {}
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     },
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "servicemap": {
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "epoch": 1,
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "modified": "2025-10-08T09:43:01.375926+0000",
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:         "services": {}
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     },
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]:     "progress_events": {}
Oct 08 09:43:11 compute-0 frosty_sinoussi[74028]: }
Oct 08 09:43:11 compute-0 systemd[1]: libpod-9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796.scope: Deactivated successfully.
Oct 08 09:43:11 compute-0 podman[74012]: 2025-10-08 09:43:11.621168777 +0000 UTC m=+0.371956562 container died 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 08 09:43:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e1a9dca2d7b30fdff36568d934a3ccab08f6e50914590d51c843954c99e8d8f-merged.mount: Deactivated successfully.
Oct 08 09:43:11 compute-0 podman[74012]: 2025-10-08 09:43:11.654294088 +0000 UTC m=+0.405081843 container remove 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 09:43:11 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2884133885' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 09:43:11 compute-0 systemd[1]: libpod-conmon-9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796.scope: Deactivated successfully.
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct 08 09:43:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.754+0000 7f971cc6d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:43:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct 08 09:43:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.965+0000 7f971cc6d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct 08 09:43:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:12.224+0000 7f971cc6d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:43:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:12.291+0000 7f971cc6d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x5613731ee9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ixicfj(active, starting, since 0.0120135s)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e1 all = 1
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [balancer INFO root] Starting
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:43:12
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [balancer INFO root] No pools available
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [progress INFO root] Loading...
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [progress INFO root] No stored events to load
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded [] historic events
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: [rbd_support INFO root] setup complete
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Oct 08 09:43:12 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct 08 09:43:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:12 compute-0 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct 08 09:43:12 compute-0 ceph-mon[73572]: mgrmap e2: compute-0.ixicfj(active, starting, since 0.0120135s)
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:12 compute-0 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ixicfj(active, since 1.02767s)
Oct 08 09:43:13 compute-0 podman[74146]: 2025-10-08 09:43:13.729205673 +0000 UTC m=+0.047450499 container create 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:13 compute-0 systemd[1]: Started libpod-conmon-2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3.scope.
Oct 08 09:43:13 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:13 compute-0 podman[74146]: 2025-10-08 09:43:13.709782762 +0000 UTC m=+0.028027838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26238a63b155f33001ca58e8b208a28abd890ab008f881f279a48c616fb6cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26238a63b155f33001ca58e8b208a28abd890ab008f881f279a48c616fb6cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26238a63b155f33001ca58e8b208a28abd890ab008f881f279a48c616fb6cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:13 compute-0 podman[74146]: 2025-10-08 09:43:13.82432399 +0000 UTC m=+0.142568846 container init 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:43:13 compute-0 podman[74146]: 2025-10-08 09:43:13.834926313 +0000 UTC m=+0.153171169 container start 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:13 compute-0 podman[74146]: 2025-10-08 09:43:13.8402773 +0000 UTC m=+0.158522276 container attach 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:43:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 08 09:43:14 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940749137' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]: 
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]: {
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "health": {
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "status": "HEALTH_OK",
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "checks": {},
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "mutes": []
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     },
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "election_epoch": 5,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "quorum": [
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         0
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     ],
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "quorum_names": [
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "compute-0"
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     ],
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "quorum_age": 10,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "monmap": {
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "epoch": 1,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "min_mon_release_name": "squid",
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "num_mons": 1
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     },
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "osdmap": {
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "epoch": 1,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "num_osds": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "num_up_osds": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "osd_up_since": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "num_in_osds": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "osd_in_since": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "num_remapped_pgs": 0
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     },
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "pgmap": {
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "pgs_by_state": [],
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "num_pgs": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "num_pools": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "num_objects": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "data_bytes": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "bytes_used": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "bytes_avail": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "bytes_total": 0
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     },
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "fsmap": {
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "epoch": 1,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "btime": "2025-10-08T09:43:01:374245+0000",
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "by_rank": [],
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "up:standby": 0
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     },
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "mgrmap": {
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "available": true,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "num_standbys": 0,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "modules": [
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:             "iostat",
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:             "nfs",
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:             "restful"
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         ],
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "services": {}
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     },
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "servicemap": {
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "epoch": 1,
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "modified": "2025-10-08T09:43:01.375926+0000",
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:         "services": {}
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     },
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]:     "progress_events": {}
Oct 08 09:43:14 compute-0 upbeat_matsumoto[74162]: }
Oct 08 09:43:14 compute-0 systemd[1]: libpod-2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3.scope: Deactivated successfully.
Oct 08 09:43:14 compute-0 podman[74146]: 2025-10-08 09:43:14.260188642 +0000 UTC m=+0.578433458 container died 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 09:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e26238a63b155f33001ca58e8b208a28abd890ab008f881f279a48c616fb6cf-merged.mount: Deactivated successfully.
Oct 08 09:43:14 compute-0 podman[74146]: 2025-10-08 09:43:14.306942603 +0000 UTC m=+0.625187449 container remove 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:43:14 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:14 compute-0 ceph-mon[73572]: mgrmap e3: compute-0.ixicfj(active, since 1.02767s)
Oct 08 09:43:14 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2940749137' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 09:43:14 compute-0 systemd[1]: libpod-conmon-2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3.scope: Deactivated successfully.
Oct 08 09:43:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ixicfj(active, since 2s)
Oct 08 09:43:14 compute-0 podman[74200]: 2025-10-08 09:43:14.378969677 +0000 UTC m=+0.054344479 container create 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:43:14 compute-0 systemd[1]: Started libpod-conmon-88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d.scope.
Oct 08 09:43:14 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:14 compute-0 podman[74200]: 2025-10-08 09:43:14.352537015 +0000 UTC m=+0.027911857 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:14 compute-0 podman[74200]: 2025-10-08 09:43:14.476422124 +0000 UTC m=+0.151796966 container init 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 09:43:14 compute-0 podman[74200]: 2025-10-08 09:43:14.486042958 +0000 UTC m=+0.161417720 container start 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:14 compute-0 podman[74200]: 2025-10-08 09:43:14.48958729 +0000 UTC m=+0.164962132 container attach 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Oct 08 09:43:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 08 09:43:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/523082670' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 08 09:43:14 compute-0 pedantic_grothendieck[74216]: 
Oct 08 09:43:14 compute-0 pedantic_grothendieck[74216]: [global]
Oct 08 09:43:14 compute-0 pedantic_grothendieck[74216]:         fsid = 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:14 compute-0 pedantic_grothendieck[74216]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 08 09:43:14 compute-0 systemd[1]: libpod-88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d.scope: Deactivated successfully.
Oct 08 09:43:14 compute-0 conmon[74216]: conmon 88e82cc624ffda0f2e43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d.scope/container/memory.events
Oct 08 09:43:14 compute-0 podman[74200]: 2025-10-08 09:43:14.847732509 +0000 UTC m=+0.523107331 container died 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701-merged.mount: Deactivated successfully.
Oct 08 09:43:14 compute-0 podman[74200]: 2025-10-08 09:43:14.896629969 +0000 UTC m=+0.572004761 container remove 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:14 compute-0 systemd[1]: libpod-conmon-88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d.scope: Deactivated successfully.
Oct 08 09:43:14 compute-0 podman[74255]: 2025-10-08 09:43:14.952243047 +0000 UTC m=+0.036245899 container create c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 09:43:14 compute-0 systemd[1]: Started libpod-conmon-c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55.scope.
Oct 08 09:43:15 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141243ac16d31f61cffa008ee4b1e1da808e7a15b6a2df374c04e3f043a3dc30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141243ac16d31f61cffa008ee4b1e1da808e7a15b6a2df374c04e3f043a3dc30/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141243ac16d31f61cffa008ee4b1e1da808e7a15b6a2df374c04e3f043a3dc30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:15 compute-0 podman[74255]: 2025-10-08 09:43:15.029021222 +0000 UTC m=+0.113024094 container init c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:43:15 compute-0 podman[74255]: 2025-10-08 09:43:14.936125576 +0000 UTC m=+0.020128478 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:15 compute-0 podman[74255]: 2025-10-08 09:43:15.033711623 +0000 UTC m=+0.117714475 container start c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 09:43:15 compute-0 podman[74255]: 2025-10-08 09:43:15.037078423 +0000 UTC m=+0.121081275 container attach c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:43:15 compute-0 ceph-mon[73572]: mgrmap e4: compute-0.ixicfj(active, since 2s)
Oct 08 09:43:15 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/523082670' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 08 09:43:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Oct 08 09:43:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1396328474' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:16 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1396328474' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 08 09:43:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1396328474' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  1: '-n'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  2: 'mgr.compute-0.ixicfj'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  3: '-f'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  4: '--setuser'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  5: 'ceph'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  6: '--setgroup'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  7: 'ceph'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  8: '--default-log-to-file=false'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  9: '--default-log-to-journald=true'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr respawn  exe_path /proc/self/exe
Oct 08 09:43:16 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ixicfj(active, since 4s)
Oct 08 09:43:16 compute-0 systemd[1]: libpod-c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55.scope: Deactivated successfully.
Oct 08 09:43:16 compute-0 podman[74255]: 2025-10-08 09:43:16.373730016 +0000 UTC m=+1.457732888 container died c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 08 09:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-141243ac16d31f61cffa008ee4b1e1da808e7a15b6a2df374c04e3f043a3dc30-merged.mount: Deactivated successfully.
Oct 08 09:43:16 compute-0 podman[74255]: 2025-10-08 09:43:16.413057542 +0000 UTC m=+1.497060394 container remove c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:16 compute-0 systemd[1]: libpod-conmon-c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55.scope: Deactivated successfully.
Oct 08 09:43:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct 08 09:43:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct 08 09:43:16 compute-0 podman[74309]: 2025-10-08 09:43:16.491356971 +0000 UTC m=+0.055902303 container create aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:16 compute-0 systemd[1]: Started libpod-conmon-aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd.scope.
Oct 08 09:43:16 compute-0 podman[74309]: 2025-10-08 09:43:16.463416345 +0000 UTC m=+0.027961737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:16 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba485fb532334a40119488da65102cb36200898a705131ab64373b2284784415/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba485fb532334a40119488da65102cb36200898a705131ab64373b2284784415/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba485fb532334a40119488da65102cb36200898a705131ab64373b2284784415/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct 08 09:43:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:16.576+0000 7fa8781df140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:43:16 compute-0 podman[74309]: 2025-10-08 09:43:16.581300612 +0000 UTC m=+0.145845974 container init aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 09:43:16 compute-0 podman[74309]: 2025-10-08 09:43:16.590609794 +0000 UTC m=+0.155155086 container start aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 08 09:43:16 compute-0 podman[74309]: 2025-10-08 09:43:16.594542398 +0000 UTC m=+0.159087770 container attach aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:43:16 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct 08 09:43:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:16.662+0000 7fa8781df140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:43:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 08 09:43:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658657222' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 09:43:17 compute-0 amazing_darwin[74345]: {
Oct 08 09:43:17 compute-0 amazing_darwin[74345]:     "epoch": 5,
Oct 08 09:43:17 compute-0 amazing_darwin[74345]:     "available": true,
Oct 08 09:43:17 compute-0 amazing_darwin[74345]:     "active_name": "compute-0.ixicfj",
Oct 08 09:43:17 compute-0 amazing_darwin[74345]:     "num_standby": 0
Oct 08 09:43:17 compute-0 amazing_darwin[74345]: }
Oct 08 09:43:17 compute-0 systemd[1]: libpod-aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd.scope: Deactivated successfully.
Oct 08 09:43:17 compute-0 podman[74309]: 2025-10-08 09:43:17.047496831 +0000 UTC m=+0.612042123 container died aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Oct 08 09:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba485fb532334a40119488da65102cb36200898a705131ab64373b2284784415-merged.mount: Deactivated successfully.
Oct 08 09:43:17 compute-0 podman[74309]: 2025-10-08 09:43:17.082781192 +0000 UTC m=+0.647326484 container remove aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:17 compute-0 systemd[1]: libpod-conmon-aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd.scope: Deactivated successfully.
Oct 08 09:43:17 compute-0 podman[74394]: 2025-10-08 09:43:17.145970137 +0000 UTC m=+0.043105539 container create a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:17 compute-0 systemd[1]: Started libpod-conmon-a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be.scope.
Oct 08 09:43:17 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:17 compute-0 podman[74394]: 2025-10-08 09:43:17.124801412 +0000 UTC m=+0.021936854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ae716067f1d69df891c3da9889fe6871f99ff24fc4112052e47f11a064c52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ae716067f1d69df891c3da9889fe6871f99ff24fc4112052e47f11a064c52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ae716067f1d69df891c3da9889fe6871f99ff24fc4112052e47f11a064c52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:17 compute-0 podman[74394]: 2025-10-08 09:43:17.242960381 +0000 UTC m=+0.140095853 container init a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 09:43:17 compute-0 podman[74394]: 2025-10-08 09:43:17.252641196 +0000 UTC m=+0.149776618 container start a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 08 09:43:17 compute-0 podman[74394]: 2025-10-08 09:43:17.257160636 +0000 UTC m=+0.154296028 container attach a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:17 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1396328474' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 08 09:43:17 compute-0 ceph-mon[73572]: mgrmap e5: compute-0.ixicfj(active, since 4s)
Oct 08 09:43:17 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1658657222' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 09:43:17 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct 08 09:43:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:17.469+0000 7fa8781df140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:43:17 compute-0 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:43:17 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct 08 09:43:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:18.098+0000 7fa8781df140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct 08 09:43:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 08 09:43:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 08 09:43:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   from numpy import show_config as show_numpy_config
Oct 08 09:43:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:18.261+0000 7fa8781df140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct 08 09:43:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:18.338+0000 7fa8781df140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct 08 09:43:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:18.468+0000 7fa8781df140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct 08 09:43:18 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct 08 09:43:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.436+0000 7fa8781df140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct 08 09:43:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.680+0000 7fa8781df140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct 08 09:43:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.768+0000 7fa8781df140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct 08 09:43:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.880+0000 7fa8781df140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct 08 09:43:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.960+0000 7fa8781df140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:43:19 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct 08 09:43:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:20.035+0000 7fa8781df140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:43:20 compute-0 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:43:20 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct 08 09:43:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:20.375+0000 7fa8781df140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:43:20 compute-0 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:43:20 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct 08 09:43:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:20.475+0000 7fa8781df140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:43:20 compute-0 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:43:20 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct 08 09:43:20 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct 08 09:43:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:20.894+0000 7fa8781df140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:43:20 compute-0 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:43:20 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct 08 09:43:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.447+0000 7fa8781df140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct 08 09:43:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.513+0000 7fa8781df140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct 08 09:43:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.588+0000 7fa8781df140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct 08 09:43:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.731+0000 7fa8781df140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct 08 09:43:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.798+0000 7fa8781df140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct 08 09:43:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.941+0000 7fa8781df140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:43:21 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct 08 09:43:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:22.158+0000 7fa8781df140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct 08 09:43:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:22.432+0000 7fa8781df140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct 08 09:43:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:22.511+0000 7fa8781df140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x5624c2d68d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ixicfj(active, starting, since 0.0133128s)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e1 all = 1
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [balancer INFO root] Starting
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:43:22
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [balancer INFO root] No pools available
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: cephadm
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct 08 09:43:22 compute-0 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct 08 09:43:22 compute-0 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct 08 09:43:22 compute-0 ceph-mon[73572]: osdmap e2: 0 total, 0 up, 0 in
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mgrmap e6: compute-0.ixicfj(active, starting, since 0.0133128s)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct 08 09:43:22 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [progress INFO root] Loading...
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [progress INFO root] No stored events to load
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded [] historic events
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct 08 09:43:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct 08 09:43:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] setup complete
Oct 08 09:43:22 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct 08 09:43:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Oct 08 09:43:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Oct 08 09:43:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019931263 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:43:23 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ixicfj(active, since 1.02644s)
Oct 08 09:43:23 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 08 09:43:23 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 08 09:43:23 compute-0 jovial_shannon[74410]: {
Oct 08 09:43:23 compute-0 jovial_shannon[74410]:     "mgrmap_epoch": 7,
Oct 08 09:43:23 compute-0 jovial_shannon[74410]:     "initialized": true
Oct 08 09:43:23 compute-0 jovial_shannon[74410]: }
Oct 08 09:43:23 compute-0 systemd[1]: libpod-a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be.scope: Deactivated successfully.
Oct 08 09:43:23 compute-0 podman[74394]: 2025-10-08 09:43:23.567244402 +0000 UTC m=+6.464379794 container died a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 09:43:23 compute-0 ceph-mon[73572]: Found migration_current of "None". Setting to last migration.
Oct 08 09:43:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:43:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:43:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:23 compute-0 ceph-mon[73572]: mgrmap e7: compute-0.ixicfj(active, since 1.02644s)
Oct 08 09:43:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c79ae716067f1d69df891c3da9889fe6871f99ff24fc4112052e47f11a064c52-merged.mount: Deactivated successfully.
Oct 08 09:43:23 compute-0 podman[74394]: 2025-10-08 09:43:23.604844342 +0000 UTC m=+6.501979734 container remove a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:23 compute-0 systemd[1]: libpod-conmon-a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be.scope: Deactivated successfully.
Oct 08 09:43:23 compute-0 podman[74559]: 2025-10-08 09:43:23.661079118 +0000 UTC m=+0.034740611 container create e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:23 compute-0 systemd[1]: Started libpod-conmon-e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874.scope.
Oct 08 09:43:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79590e1f10f35c727f63955059a72e05936128d55f1de85dc5ebe9023c559a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79590e1f10f35c727f63955059a72e05936128d55f1de85dc5ebe9023c559a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79590e1f10f35c727f63955059a72e05936128d55f1de85dc5ebe9023c559a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:23 compute-0 podman[74559]: 2025-10-08 09:43:23.735096332 +0000 UTC m=+0.108757885 container init e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 09:43:23 compute-0 podman[74559]: 2025-10-08 09:43:23.739946927 +0000 UTC m=+0.113608420 container start e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:23 compute-0 podman[74559]: 2025-10-08 09:43:23.645619755 +0000 UTC m=+0.019281268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:23 compute-0 podman[74559]: 2025-10-08 09:43:23.744078728 +0000 UTC m=+0.117740271 container attach e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Oct 08 09:43:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 08 09:43:24 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:24 compute-0 systemd[1]: libpod-e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874.scope: Deactivated successfully.
Oct 08 09:43:24 compute-0 podman[74559]: 2025-10-08 09:43:24.102939689 +0000 UTC m=+0.476601192 container died e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b79590e1f10f35c727f63955059a72e05936128d55f1de85dc5ebe9023c559a7-merged.mount: Deactivated successfully.
Oct 08 09:43:24 compute-0 podman[74559]: 2025-10-08 09:43:24.136328735 +0000 UTC m=+0.509990228 container remove e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:24 compute-0 systemd[1]: libpod-conmon-e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874.scope: Deactivated successfully.
Oct 08 09:43:24 compute-0 podman[74614]: 2025-10-08 09:43:24.193834632 +0000 UTC m=+0.038478321 container create db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 09:43:24 compute-0 systemd[1]: Started libpod-conmon-db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f.scope.
Oct 08 09:43:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c9b60f63031d8ff53300db5cccb70617253d40b9aeaa4963e5af21e63ada0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c9b60f63031d8ff53300db5cccb70617253d40b9aeaa4963e5af21e63ada0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c9b60f63031d8ff53300db5cccb70617253d40b9aeaa4963e5af21e63ada0c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:24 compute-0 podman[74614]: 2025-10-08 09:43:24.260728298 +0000 UTC m=+0.105372027 container init db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 09:43:24 compute-0 podman[74614]: 2025-10-08 09:43:24.270598723 +0000 UTC m=+0.115242422 container start db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 08 09:43:24 compute-0 podman[74614]: 2025-10-08 09:43:24.175428334 +0000 UTC m=+0.020072063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:24 compute-0 podman[74614]: 2025-10-08 09:43:24.273539637 +0000 UTC m=+0.118183326 container attach db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Bus STARTING
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Bus STARTING
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Oct 08 09:43:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: [cephadm INFO root] Set ssh ssh_user
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 08 09:43:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Oct 08 09:43:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: [cephadm INFO root] Set ssh ssh_config
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 08 09:43:24 compute-0 vigorous_heyrovsky[74631]: ssh user set to ceph-admin. sudo will be used
Oct 08 09:43:24 compute-0 systemd[1]: libpod-db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f.scope: Deactivated successfully.
Oct 08 09:43:24 compute-0 podman[74614]: 2025-10-08 09:43:24.626485608 +0000 UTC m=+0.471129337 container died db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-45c9b60f63031d8ff53300db5cccb70617253d40b9aeaa4963e5af21e63ada0c-merged.mount: Deactivated successfully.
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Bus STARTED
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Bus STARTED
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Client ('192.168.122.100', 46604) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:43:24 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Client ('192.168.122.100', 46604) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:43:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 08 09:43:24 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:24 compute-0 podman[74614]: 2025-10-08 09:43:24.673597763 +0000 UTC m=+0.518241482 container remove db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 09:43:24 compute-0 systemd[1]: libpod-conmon-db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f.scope: Deactivated successfully.
Oct 08 09:43:24 compute-0 podman[74692]: 2025-10-08 09:43:24.764162635 +0000 UTC m=+0.061802695 container create 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 09:43:24 compute-0 systemd[1]: Started libpod-conmon-39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288.scope.
Oct 08 09:43:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:24 compute-0 podman[74692]: 2025-10-08 09:43:24.739513138 +0000 UTC m=+0.037153288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:24 compute-0 podman[74692]: 2025-10-08 09:43:24.840688849 +0000 UTC m=+0.138328919 container init 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:43:24 compute-0 podman[74692]: 2025-10-08 09:43:24.848338863 +0000 UTC m=+0.145978923 container start 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:24 compute-0 podman[74692]: 2025-10-08 09:43:24.851651659 +0000 UTC m=+0.149291719 container attach 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 08 09:43:25 compute-0 ceph-mon[73572]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 08 09:43:25 compute-0 ceph-mon[73572]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 08 09:43:25 compute-0 ceph-mon[73572]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:25 compute-0 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Bus STARTING
Oct 08 09:43:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:25 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ixicfj(active, since 2s)
Oct 08 09:43:25 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Oct 08 09:43:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:25 compute-0 ceph-mgr[73869]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 08 09:43:25 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 08 09:43:25 compute-0 ceph-mgr[73869]: [cephadm INFO root] Set ssh private key
Oct 08 09:43:25 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 08 09:43:25 compute-0 systemd[1]: libpod-39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288.scope: Deactivated successfully.
Oct 08 09:43:25 compute-0 podman[74692]: 2025-10-08 09:43:25.185099868 +0000 UTC m=+0.482739938 container died 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 09:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58-merged.mount: Deactivated successfully.
Oct 08 09:43:25 compute-0 podman[74692]: 2025-10-08 09:43:25.228004548 +0000 UTC m=+0.525644638 container remove 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:25 compute-0 systemd[1]: libpod-conmon-39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288.scope: Deactivated successfully.
Oct 08 09:43:25 compute-0 podman[74746]: 2025-10-08 09:43:25.306501514 +0000 UTC m=+0.052775436 container create 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:25 compute-0 systemd[1]: Started libpod-conmon-1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131.scope.
Oct 08 09:43:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:25 compute-0 podman[74746]: 2025-10-08 09:43:25.287953492 +0000 UTC m=+0.034227424 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:25 compute-0 podman[74746]: 2025-10-08 09:43:25.396013423 +0000 UTC m=+0.142287355 container init 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:25 compute-0 podman[74746]: 2025-10-08 09:43:25.406620853 +0000 UTC m=+0.152894765 container start 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 09:43:25 compute-0 podman[74746]: 2025-10-08 09:43:25.411763507 +0000 UTC m=+0.158037419 container attach 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:25 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Oct 08 09:43:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:25 compute-0 ceph-mgr[73869]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 08 09:43:25 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 08 09:43:25 compute-0 systemd[1]: libpod-1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131.scope: Deactivated successfully.
Oct 08 09:43:25 compute-0 podman[74746]: 2025-10-08 09:43:25.769446299 +0000 UTC m=+0.515720231 container died 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60-merged.mount: Deactivated successfully.
Oct 08 09:43:25 compute-0 podman[74746]: 2025-10-08 09:43:25.813683142 +0000 UTC m=+0.559957094 container remove 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:25 compute-0 systemd[1]: libpod-conmon-1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131.scope: Deactivated successfully.
Oct 08 09:43:25 compute-0 podman[74798]: 2025-10-08 09:43:25.89413233 +0000 UTC m=+0.053832409 container create c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:25 compute-0 systemd[1]: Started libpod-conmon-c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606.scope.
Oct 08 09:43:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0867299247c15c0c59d22cc9c1b69173479677c1a80a9a7eca732a5af663a66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0867299247c15c0c59d22cc9c1b69173479677c1a80a9a7eca732a5af663a66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0867299247c15c0c59d22cc9c1b69173479677c1a80a9a7eca732a5af663a66/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:25 compute-0 podman[74798]: 2025-10-08 09:43:25.877118897 +0000 UTC m=+0.036818986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:25 compute-0 podman[74798]: 2025-10-08 09:43:25.995382224 +0000 UTC m=+0.155082313 container init c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:43:26 compute-0 podman[74798]: 2025-10-08 09:43:26.004600499 +0000 UTC m=+0.164300598 container start c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:26 compute-0 podman[74798]: 2025-10-08 09:43:26.008149762 +0000 UTC m=+0.167849851 container attach c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 08 09:43:26 compute-0 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:43:26 compute-0 ceph-mon[73572]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:26 compute-0 ceph-mon[73572]: Set ssh ssh_user
Oct 08 09:43:26 compute-0 ceph-mon[73572]: Set ssh ssh_config
Oct 08 09:43:26 compute-0 ceph-mon[73572]: ssh user set to ceph-admin. sudo will be used
Oct 08 09:43:26 compute-0 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:43:26 compute-0 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Bus STARTED
Oct 08 09:43:26 compute-0 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Client ('192.168.122.100', 46604) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:43:26 compute-0 ceph-mon[73572]: mgrmap e8: compute-0.ixicfj(active, since 2s)
Oct 08 09:43:26 compute-0 ceph-mon[73572]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:26 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:26 compute-0 ceph-mon[73572]: Set ssh ssh_identity_key
Oct 08 09:43:26 compute-0 ceph-mon[73572]: Set ssh private key
Oct 08 09:43:26 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:26 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:26 compute-0 gracious_ganguly[74814]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMUetrKYz2yzUqXQdz0GMEc7nZWQFiWernMvGrA8oCSXKUWp6oF4zAIrVF9kp7fG8GVxs6O5yNHgIYsMs9v39LHMe/VQPYXxcVu6/8aDnAS2wzSlH1kfOrpdntAo+JesC34iTzRriGvjARpVqmkBrz6RB9QZX8SnrBdZst0W4m1X8OD+O6DYEBMJxWtgiIPmMnOubMs+k1f8ONJcYKxq3HscWukNjnCKBsiyvX3kwhdV590HAFLDaMvqxoan4CH48GeLqNYj86NBeSsJuWftk0wYOtBlTJMmOE4EDYzliyGb+KuHgFYT5qijo1SvM4ayDYzPY3kP0UsGfsLje0plcbILyKEBHHUs1Xf6XfnOnvpCpN6uEr24OyPbe53iYjL/C0ZAjRuU+unEK4t4SmRsyU4cZqe6i+RdjvwcTF8fasBcSM02BpcHbJfWZCp/smBkJdsq3XnVWRBu4mJUByoSrPl3DVwH3GUayVW16yOYMiqo8gro2cCnDPwCmwjrmEzqM= zuul@controller
Oct 08 09:43:26 compute-0 systemd[1]: libpod-c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606.scope: Deactivated successfully.
Oct 08 09:43:26 compute-0 podman[74798]: 2025-10-08 09:43:26.4113937 +0000 UTC m=+0.571093769 container died c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0867299247c15c0c59d22cc9c1b69173479677c1a80a9a7eca732a5af663a66-merged.mount: Deactivated successfully.
Oct 08 09:43:26 compute-0 podman[74798]: 2025-10-08 09:43:26.449649511 +0000 UTC m=+0.609349590 container remove c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:26 compute-0 systemd[1]: libpod-conmon-c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606.scope: Deactivated successfully.
Oct 08 09:43:26 compute-0 podman[74851]: 2025-10-08 09:43:26.509964027 +0000 UTC m=+0.040967419 container create 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:26 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:26 compute-0 systemd[1]: Started libpod-conmon-35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3.scope.
Oct 08 09:43:26 compute-0 podman[74851]: 2025-10-08 09:43:26.489281647 +0000 UTC m=+0.020285069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:26 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef55c080c77112ea1cb56957cce303d6477f1d9cc5387cf26dc4036e7c62b5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef55c080c77112ea1cb56957cce303d6477f1d9cc5387cf26dc4036e7c62b5b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef55c080c77112ea1cb56957cce303d6477f1d9cc5387cf26dc4036e7c62b5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:26 compute-0 podman[74851]: 2025-10-08 09:43:26.607565055 +0000 UTC m=+0.138568447 container init 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:26 compute-0 podman[74851]: 2025-10-08 09:43:26.624118423 +0000 UTC m=+0.155121825 container start 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:26 compute-0 podman[74851]: 2025-10-08 09:43:26.627539942 +0000 UTC m=+0.158543374 container attach 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 09:43:27 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:27 compute-0 ceph-mon[73572]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:27 compute-0 ceph-mon[73572]: Set ssh ssh_identity_pub
Oct 08 09:43:27 compute-0 ceph-mon[73572]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:27 compute-0 sshd-session[74894]: Accepted publickey for ceph-admin from 192.168.122.100 port 56752 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:27 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 08 09:43:27 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 08 09:43:27 compute-0 systemd-logind[798]: New session 22 of user ceph-admin.
Oct 08 09:43:27 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 08 09:43:27 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 08 09:43:27 compute-0 systemd[74898]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:27 compute-0 systemd[74898]: Queued start job for default target Main User Target.
Oct 08 09:43:27 compute-0 systemd[74898]: Created slice User Application Slice.
Oct 08 09:43:27 compute-0 systemd[74898]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 08 09:43:27 compute-0 systemd[74898]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 09:43:27 compute-0 systemd[74898]: Reached target Paths.
Oct 08 09:43:27 compute-0 systemd[74898]: Reached target Timers.
Oct 08 09:43:27 compute-0 systemd[74898]: Starting D-Bus User Message Bus Socket...
Oct 08 09:43:27 compute-0 systemd[74898]: Starting Create User's Volatile Files and Directories...
Oct 08 09:43:27 compute-0 sshd-session[74911]: Accepted publickey for ceph-admin from 192.168.122.100 port 56762 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:27 compute-0 systemd[74898]: Finished Create User's Volatile Files and Directories.
Oct 08 09:43:27 compute-0 systemd[74898]: Listening on D-Bus User Message Bus Socket.
Oct 08 09:43:27 compute-0 systemd[74898]: Reached target Sockets.
Oct 08 09:43:27 compute-0 systemd[74898]: Reached target Basic System.
Oct 08 09:43:27 compute-0 systemd[74898]: Reached target Main User Target.
Oct 08 09:43:27 compute-0 systemd[74898]: Startup finished in 138ms.
Oct 08 09:43:27 compute-0 systemd-logind[798]: New session 24 of user ceph-admin.
Oct 08 09:43:27 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 08 09:43:27 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Oct 08 09:43:27 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Oct 08 09:43:27 compute-0 sshd-session[74894]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:27 compute-0 sshd-session[74911]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:27 compute-0 sudo[74918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:27 compute-0 sudo[74918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:27 compute-0 sudo[74918]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:27 compute-0 sshd-session[74943]: Accepted publickey for ceph-admin from 192.168.122.100 port 56768 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:27 compute-0 systemd-logind[798]: New session 25 of user ceph-admin.
Oct 08 09:43:27 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Oct 08 09:43:27 compute-0 sshd-session[74943]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:27 compute-0 sudo[74947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Oct 08 09:43:27 compute-0 sudo[74947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:27 compute-0 sudo[74947]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:28 compute-0 ceph-mon[73572]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:28 compute-0 sshd-session[74972]: Accepted publickey for ceph-admin from 192.168.122.100 port 56778 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:28 compute-0 systemd-logind[798]: New session 26 of user ceph-admin.
Oct 08 09:43:28 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Oct 08 09:43:28 compute-0 sshd-session[74972]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053155 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:43:28 compute-0 sudo[74976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 08 09:43:28 compute-0 sudo[74976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:28 compute-0 sudo[74976]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:28 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 08 09:43:28 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 08 09:43:28 compute-0 sshd-session[75001]: Accepted publickey for ceph-admin from 192.168.122.100 port 56780 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:28 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:28 compute-0 systemd-logind[798]: New session 27 of user ceph-admin.
Oct 08 09:43:28 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Oct 08 09:43:28 compute-0 sshd-session[75001]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:28 compute-0 sudo[75005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:28 compute-0 sudo[75005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:28 compute-0 sudo[75005]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:28 compute-0 sshd-session[75030]: Accepted publickey for ceph-admin from 192.168.122.100 port 56786 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:28 compute-0 systemd-logind[798]: New session 28 of user ceph-admin.
Oct 08 09:43:28 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Oct 08 09:43:28 compute-0 sshd-session[75030]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:28 compute-0 sudo[75034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:28 compute-0 sudo[75034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:28 compute-0 sudo[75034]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:29 compute-0 ceph-mon[73572]: Deploying cephadm binary to compute-0
Oct 08 09:43:29 compute-0 sshd-session[75059]: Accepted publickey for ceph-admin from 192.168.122.100 port 56800 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:29 compute-0 systemd-logind[798]: New session 29 of user ceph-admin.
Oct 08 09:43:29 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Oct 08 09:43:29 compute-0 sshd-session[75059]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:29 compute-0 sudo[75063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 08 09:43:29 compute-0 sudo[75063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:29 compute-0 sudo[75063]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:29 compute-0 sshd-session[75088]: Accepted publickey for ceph-admin from 192.168.122.100 port 56814 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:29 compute-0 systemd-logind[798]: New session 30 of user ceph-admin.
Oct 08 09:43:29 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Oct 08 09:43:29 compute-0 sshd-session[75088]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:29 compute-0 sudo[75092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:29 compute-0 sudo[75092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:29 compute-0 sudo[75092]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:29 compute-0 sshd-session[75117]: Accepted publickey for ceph-admin from 192.168.122.100 port 56820 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:29 compute-0 systemd-logind[798]: New session 31 of user ceph-admin.
Oct 08 09:43:29 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct 08 09:43:29 compute-0 sshd-session[75117]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:30 compute-0 sudo[75121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 08 09:43:30 compute-0 sudo[75121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:30 compute-0 sudo[75121]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:30 compute-0 sshd-session[75146]: Accepted publickey for ceph-admin from 192.168.122.100 port 56836 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:30 compute-0 systemd-logind[798]: New session 32 of user ceph-admin.
Oct 08 09:43:30 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct 08 09:43:30 compute-0 sshd-session[75146]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:30 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:31 compute-0 sshd-session[75173]: Accepted publickey for ceph-admin from 192.168.122.100 port 56844 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:31 compute-0 systemd-logind[798]: New session 33 of user ceph-admin.
Oct 08 09:43:31 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct 08 09:43:31 compute-0 sshd-session[75173]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:31 compute-0 sudo[75177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 08 09:43:31 compute-0 sudo[75177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:31 compute-0 sudo[75177]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:31 compute-0 sshd-session[75202]: Accepted publickey for ceph-admin from 192.168.122.100 port 56854 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:43:31 compute-0 systemd-logind[798]: New session 34 of user ceph-admin.
Oct 08 09:43:31 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Oct 08 09:43:31 compute-0 sshd-session[75202]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:43:31 compute-0 sudo[75206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Oct 08 09:43:31 compute-0 sudo[75206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:32 compute-0 sudo[75206]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 08 09:43:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:32 compute-0 ceph-mgr[73869]: [cephadm INFO root] Added host compute-0
Oct 08 09:43:32 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 08 09:43:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 08 09:43:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:32 compute-0 suspicious_wilbur[74868]: Added host 'compute-0' with addr '192.168.122.100'
Oct 08 09:43:32 compute-0 systemd[1]: libpod-35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3.scope: Deactivated successfully.
Oct 08 09:43:32 compute-0 podman[74851]: 2025-10-08 09:43:32.203429459 +0000 UTC m=+5.734432851 container died 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fef55c080c77112ea1cb56957cce303d6477f1d9cc5387cf26dc4036e7c62b5b-merged.mount: Deactivated successfully.
Oct 08 09:43:32 compute-0 podman[74851]: 2025-10-08 09:43:32.253197538 +0000 UTC m=+5.784200940 container remove 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:32 compute-0 systemd[1]: libpod-conmon-35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3.scope: Deactivated successfully.
Oct 08 09:43:32 compute-0 sudo[75253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:32 compute-0 sudo[75253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:32 compute-0 sudo[75253]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:32 compute-0 sudo[75296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Oct 08 09:43:32 compute-0 podman[75288]: 2025-10-08 09:43:32.327334476 +0000 UTC m=+0.043966365 container create a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:32 compute-0 sudo[75296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:32 compute-0 systemd[1]: Started libpod-conmon-a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb.scope.
Oct 08 09:43:32 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99606dc3eec62013e8eb6f6b86e5f3d1a82885ac6c795aaf02c84982184d7e8d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99606dc3eec62013e8eb6f6b86e5f3d1a82885ac6c795aaf02c84982184d7e8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99606dc3eec62013e8eb6f6b86e5f3d1a82885ac6c795aaf02c84982184d7e8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:32 compute-0 podman[75288]: 2025-10-08 09:43:32.399599694 +0000 UTC m=+0.116231613 container init a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:43:32 compute-0 podman[75288]: 2025-10-08 09:43:32.308087091 +0000 UTC m=+0.024719040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:32 compute-0 podman[75288]: 2025-10-08 09:43:32.40793836 +0000 UTC m=+0.124570249 container start a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:32 compute-0 podman[75288]: 2025-10-08 09:43:32.411297177 +0000 UTC m=+0.127929086 container attach a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:32 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:32 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:32 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 08 09:43:32 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 08 09:43:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 08 09:43:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:32 compute-0 gifted_keldysh[75330]: Scheduled mon update...
Oct 08 09:43:32 compute-0 systemd[1]: libpod-a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb.scope: Deactivated successfully.
Oct 08 09:43:32 compute-0 podman[75288]: 2025-10-08 09:43:32.850978328 +0000 UTC m=+0.567610227 container died a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-99606dc3eec62013e8eb6f6b86e5f3d1a82885ac6c795aaf02c84982184d7e8d-merged.mount: Deactivated successfully.
Oct 08 09:43:32 compute-0 podman[75288]: 2025-10-08 09:43:32.892451743 +0000 UTC m=+0.609083632 container remove a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:32 compute-0 systemd[1]: libpod-conmon-a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb.scope: Deactivated successfully.
Oct 08 09:43:32 compute-0 podman[75392]: 2025-10-08 09:43:32.976492477 +0000 UTC m=+0.057849579 container create d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:33 compute-0 systemd[1]: Started libpod-conmon-d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798.scope.
Oct 08 09:43:33 compute-0 podman[75392]: 2025-10-08 09:43:32.948818553 +0000 UTC m=+0.030175745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:33 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd8e8c4baf10b690e1c1396698681a0665d0bd8f1173c6d4bbd9cb097083fea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd8e8c4baf10b690e1c1396698681a0665d0bd8f1173c6d4bbd9cb097083fea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd8e8c4baf10b690e1c1396698681a0665d0bd8f1173c6d4bbd9cb097083fea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:33 compute-0 podman[75392]: 2025-10-08 09:43:33.087326266 +0000 UTC m=+0.168683358 container init d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:33 compute-0 podman[75392]: 2025-10-08 09:43:33.101879631 +0000 UTC m=+0.183236753 container start d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 09:43:33 compute-0 podman[75392]: 2025-10-08 09:43:33.10561242 +0000 UTC m=+0.186969522 container attach d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:33 compute-0 ceph-mon[73572]: Added host compute-0
Oct 08 09:43:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:43:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:33 compute-0 podman[75365]: 2025-10-08 09:43:33.234642241 +0000 UTC m=+0.657486448 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:43:33 compute-0 podman[75446]: 2025-10-08 09:43:33.359043974 +0000 UTC m=+0.047441446 container create d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:33 compute-0 systemd[1]: Started libpod-conmon-d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7.scope.
Oct 08 09:43:33 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:33 compute-0 podman[75446]: 2025-10-08 09:43:33.418458541 +0000 UTC m=+0.106856023 container init d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:33 compute-0 podman[75446]: 2025-10-08 09:43:33.42532295 +0000 UTC m=+0.113720412 container start d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 09:43:33 compute-0 podman[75446]: 2025-10-08 09:43:33.427965444 +0000 UTC m=+0.116362926 container attach d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:33 compute-0 podman[75446]: 2025-10-08 09:43:33.333750096 +0000 UTC m=+0.022147638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:33 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:33 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 08 09:43:33 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 08 09:43:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 08 09:43:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:33 compute-0 confident_morse[75408]: Scheduled mgr update...
Oct 08 09:43:33 compute-0 podman[75392]: 2025-10-08 09:43:33.512839425 +0000 UTC m=+0.594196527 container died d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 09:43:33 compute-0 systemd[1]: libpod-d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798.scope: Deactivated successfully.
Oct 08 09:43:33 compute-0 sharp_euclid[75463]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct 08 09:43:33 compute-0 systemd[1]: libpod-d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7.scope: Deactivated successfully.
Oct 08 09:43:33 compute-0 podman[75446]: 2025-10-08 09:43:33.527819924 +0000 UTC m=+0.216217416 container died d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 09:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dd8e8c4baf10b690e1c1396698681a0665d0bd8f1173c6d4bbd9cb097083fea-merged.mount: Deactivated successfully.
Oct 08 09:43:33 compute-0 podman[75392]: 2025-10-08 09:43:33.555460627 +0000 UTC m=+0.636817729 container remove d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d8e68a1821da843b707249232c6675479a71da5d3ffb2a8a6b5ef4bb2229bda-merged.mount: Deactivated successfully.
Oct 08 09:43:33 compute-0 systemd[1]: libpod-conmon-d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798.scope: Deactivated successfully.
Oct 08 09:43:33 compute-0 podman[75446]: 2025-10-08 09:43:33.584995129 +0000 UTC m=+0.273392611 container remove d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 09:43:33 compute-0 systemd[1]: libpod-conmon-d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7.scope: Deactivated successfully.
Oct 08 09:43:33 compute-0 podman[75489]: 2025-10-08 09:43:33.608864892 +0000 UTC m=+0.035401242 container create 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:33 compute-0 sudo[75296]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Oct 08 09:43:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:33 compute-0 systemd[1]: Started libpod-conmon-041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066.scope.
Oct 08 09:43:33 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98cd2ebf64193ff239b1b310d8bf68197256ab91b27ee7743e3eb68a49a13f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98cd2ebf64193ff239b1b310d8bf68197256ab91b27ee7743e3eb68a49a13f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98cd2ebf64193ff239b1b310d8bf68197256ab91b27ee7743e3eb68a49a13f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:33 compute-0 podman[75489]: 2025-10-08 09:43:33.593604374 +0000 UTC m=+0.020140754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:33 compute-0 podman[75489]: 2025-10-08 09:43:33.698227335 +0000 UTC m=+0.124763705 container init 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:43:33 compute-0 sudo[75505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:33 compute-0 sudo[75505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:33 compute-0 sudo[75505]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:33 compute-0 podman[75489]: 2025-10-08 09:43:33.712813072 +0000 UTC m=+0.139349422 container start 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 09:43:33 compute-0 podman[75489]: 2025-10-08 09:43:33.716192159 +0000 UTC m=+0.142728529 container attach 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:33 compute-0 sudo[75534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 08 09:43:33 compute-0 sudo[75534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:34 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:34 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service crash spec with placement *
Oct 08 09:43:34 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 08 09:43:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 08 09:43:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:34 compute-0 funny_ganguly[75506]: Scheduled crash update...
Oct 08 09:43:34 compute-0 sudo[75534]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:34 compute-0 systemd[1]: libpod-041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066.scope: Deactivated successfully.
Oct 08 09:43:34 compute-0 podman[75489]: 2025-10-08 09:43:34.062485759 +0000 UTC m=+0.489022109 container died 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 08 09:43:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb98cd2ebf64193ff239b1b310d8bf68197256ab91b27ee7743e3eb68a49a13f-merged.mount: Deactivated successfully.
Oct 08 09:43:34 compute-0 podman[75489]: 2025-10-08 09:43:34.101266587 +0000 UTC m=+0.527802937 container remove 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:34 compute-0 systemd[1]: libpod-conmon-041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066.scope: Deactivated successfully.
Oct 08 09:43:34 compute-0 sudo[75603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:34 compute-0 sudo[75603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:34 compute-0 sudo[75603]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:34 compute-0 podman[75637]: 2025-10-08 09:43:34.163297048 +0000 UTC m=+0.045609298 container create 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 09:43:34 compute-0 sudo[75647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 09:43:34 compute-0 sudo[75647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:34 compute-0 ceph-mon[73572]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:34 compute-0 ceph-mon[73572]: Saving service mon spec with placement count:5
Oct 08 09:43:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:34 compute-0 systemd[1]: Started libpod-conmon-3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84.scope.
Oct 08 09:43:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb57213f3de74c89187c7e8ac1cfd3331e64f4a3b4d7a520a70056aa8f13c7a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb57213f3de74c89187c7e8ac1cfd3331e64f4a3b4d7a520a70056aa8f13c7a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb57213f3de74c89187c7e8ac1cfd3331e64f4a3b4d7a520a70056aa8f13c7a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:34 compute-0 podman[75637]: 2025-10-08 09:43:34.228974485 +0000 UTC m=+0.111286745 container init 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:34 compute-0 podman[75637]: 2025-10-08 09:43:34.233364116 +0000 UTC m=+0.115676366 container start 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:43:34 compute-0 podman[75637]: 2025-10-08 09:43:34.136852733 +0000 UTC m=+0.019165013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:34 compute-0 podman[75637]: 2025-10-08 09:43:34.235963599 +0000 UTC m=+0.118275869 container attach 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 09:43:34 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Oct 08 09:43:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2569311365' entity='client.admin' 
Oct 08 09:43:34 compute-0 systemd[1]: libpod-3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84.scope: Deactivated successfully.
Oct 08 09:43:34 compute-0 podman[75637]: 2025-10-08 09:43:34.583223208 +0000 UTC m=+0.465535518 container died 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb57213f3de74c89187c7e8ac1cfd3331e64f4a3b4d7a520a70056aa8f13c7a6-merged.mount: Deactivated successfully.
Oct 08 09:43:34 compute-0 podman[75637]: 2025-10-08 09:43:34.644974971 +0000 UTC m=+0.527287261 container remove 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:34 compute-0 systemd[1]: libpod-conmon-3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84.scope: Deactivated successfully.
Oct 08 09:43:34 compute-0 podman[75777]: 2025-10-08 09:43:34.720767721 +0000 UTC m=+0.099799979 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:43:34 compute-0 podman[75800]: 2025-10-08 09:43:34.734403576 +0000 UTC m=+0.060197264 container create e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:34 compute-0 systemd[1]: Started libpod-conmon-e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d.scope.
Oct 08 09:43:34 compute-0 podman[75800]: 2025-10-08 09:43:34.708291152 +0000 UTC m=+0.034084860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7fbf2a34d80000fb3c491a25e9767b2ca5f667e5e701c94e07dc3620bd76dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7fbf2a34d80000fb3c491a25e9767b2ca5f667e5e701c94e07dc3620bd76dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7fbf2a34d80000fb3c491a25e9767b2ca5f667e5e701c94e07dc3620bd76dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:34 compute-0 podman[75800]: 2025-10-08 09:43:34.832653564 +0000 UTC m=+0.158447282 container init e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:43:34 compute-0 podman[75800]: 2025-10-08 09:43:34.84287266 +0000 UTC m=+0.168666348 container start e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 09:43:34 compute-0 podman[75800]: 2025-10-08 09:43:34.846557588 +0000 UTC m=+0.172351276 container attach e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 09:43:34 compute-0 podman[75777]: 2025-10-08 09:43:34.84943482 +0000 UTC m=+0.228466988 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 09:43:34 compute-0 sudo[75647]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:35 compute-0 sudo[75880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:35 compute-0 sudo[75880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:35 compute-0 sudo[75880]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:35 compute-0 sudo[75905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:43:35 compute-0 sudo[75905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Oct 08 09:43:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:35 compute-0 ceph-mon[73572]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:35 compute-0 ceph-mon[73572]: Saving service mgr spec with placement count:2
Oct 08 09:43:35 compute-0 ceph-mon[73572]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:35 compute-0 ceph-mon[73572]: Saving service crash spec with placement *
Oct 08 09:43:35 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2569311365' entity='client.admin' 
Oct 08 09:43:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:35 compute-0 systemd[1]: libpod-e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d.scope: Deactivated successfully.
Oct 08 09:43:35 compute-0 podman[75800]: 2025-10-08 09:43:35.199180509 +0000 UTC m=+0.524974197 container died e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f7fbf2a34d80000fb3c491a25e9767b2ca5f667e5e701c94e07dc3620bd76dc-merged.mount: Deactivated successfully.
Oct 08 09:43:35 compute-0 podman[75800]: 2025-10-08 09:43:35.242488362 +0000 UTC m=+0.568282060 container remove e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 09:43:35 compute-0 systemd[1]: libpod-conmon-e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d.scope: Deactivated successfully.
Oct 08 09:43:35 compute-0 podman[75946]: 2025-10-08 09:43:35.317347262 +0000 UTC m=+0.052448686 container create 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 08 09:43:35 compute-0 systemd[1]: Started libpod-conmon-9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923.scope.
Oct 08 09:43:35 compute-0 podman[75946]: 2025-10-08 09:43:35.288722789 +0000 UTC m=+0.023824263 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:35 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 75978 (sysctl)
Oct 08 09:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/762ff2f03f17c75d6e209d5d454bce38f82c7719a94a57829218b752f84a6867/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:35 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 08 09:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/762ff2f03f17c75d6e209d5d454bce38f82c7719a94a57829218b752f84a6867/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/762ff2f03f17c75d6e209d5d454bce38f82c7719a94a57829218b752f84a6867/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:35 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 08 09:43:35 compute-0 podman[75946]: 2025-10-08 09:43:35.426760436 +0000 UTC m=+0.161861910 container init 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:35 compute-0 podman[75946]: 2025-10-08 09:43:35.434259646 +0000 UTC m=+0.169361040 container start 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:35 compute-0 podman[75946]: 2025-10-08 09:43:35.438128609 +0000 UTC m=+0.173230033 container attach 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:35 compute-0 sudo[75905]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 08 09:43:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:35 compute-0 ceph-mgr[73869]: [cephadm INFO root] Added label _admin to host compute-0
Oct 08 09:43:35 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 08 09:43:35 compute-0 vigilant_haibt[75975]: Added label _admin to host compute-0
Oct 08 09:43:35 compute-0 sudo[76021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:35 compute-0 sudo[76021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:35 compute-0 sudo[76021]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:35 compute-0 systemd[1]: libpod-9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923.scope: Deactivated successfully.
Oct 08 09:43:35 compute-0 podman[75946]: 2025-10-08 09:43:35.843415919 +0000 UTC m=+0.578517303 container died 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 09:43:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-762ff2f03f17c75d6e209d5d454bce38f82c7719a94a57829218b752f84a6867-merged.mount: Deactivated successfully.
Oct 08 09:43:35 compute-0 podman[75946]: 2025-10-08 09:43:35.8890265 +0000 UTC m=+0.624127904 container remove 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 09:43:35 compute-0 sudo[76048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 08 09:43:35 compute-0 systemd[1]: libpod-conmon-9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923.scope: Deactivated successfully.
Oct 08 09:43:35 compute-0 sudo[76048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:35 compute-0 podman[76083]: 2025-10-08 09:43:35.943441321 +0000 UTC m=+0.033376536 container create 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 09:43:35 compute-0 systemd[1]: Started libpod-conmon-6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45.scope.
Oct 08 09:43:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9f3da93f3e1bd0ec9b8f530e9a61afa5013691aa89bcf118e58105efe218c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9f3da93f3e1bd0ec9b8f530e9a61afa5013691aa89bcf118e58105efe218c3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9f3da93f3e1bd0ec9b8f530e9a61afa5013691aa89bcf118e58105efe218c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:36 compute-0 podman[76083]: 2025-10-08 09:43:36.015813374 +0000 UTC m=+0.105748679 container init 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:43:36 compute-0 podman[76083]: 2025-10-08 09:43:36.021343954 +0000 UTC m=+0.111279169 container start 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 08 09:43:36 compute-0 podman[76083]: 2025-10-08 09:43:36.024547752 +0000 UTC m=+0.114482997 container attach 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:36 compute-0 podman[76083]: 2025-10-08 09:43:35.929275926 +0000 UTC m=+0.019211171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:36 compute-0 sudo[76048]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:36 compute-0 ceph-mon[73572]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:36 compute-0 sudo[76142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:36 compute-0 sudo[76142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:36 compute-0 sudo[76142]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:36 compute-0 sudo[76167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- inventory --format=json-pretty --filter-for-batch
Oct 08 09:43:36 compute-0 sudo[76167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Oct 08 09:43:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4076938877' entity='client.admin' 
Oct 08 09:43:36 compute-0 upbeat_wiles[76101]: set mgr/dashboard/cluster/status
Oct 08 09:43:36 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:36 compute-0 systemd[1]: libpod-6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45.scope: Deactivated successfully.
Oct 08 09:43:36 compute-0 podman[76083]: 2025-10-08 09:43:36.53563914 +0000 UTC m=+0.625574355 container died 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 09:43:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d9f3da93f3e1bd0ec9b8f530e9a61afa5013691aa89bcf118e58105efe218c3-merged.mount: Deactivated successfully.
Oct 08 09:43:36 compute-0 podman[76083]: 2025-10-08 09:43:36.57471697 +0000 UTC m=+0.664652185 container remove 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:36 compute-0 systemd[1]: libpod-conmon-6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45.scope: Deactivated successfully.
Oct 08 09:43:36 compute-0 sudo[72531]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:36 compute-0 podman[76243]: 2025-10-08 09:43:36.665612862 +0000 UTC m=+0.040681471 container create a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:36 compute-0 systemd[1]: Started libpod-conmon-a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7.scope.
Oct 08 09:43:36 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:36 compute-0 podman[76243]: 2025-10-08 09:43:36.646071362 +0000 UTC m=+0.021140011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:43:36 compute-0 podman[76243]: 2025-10-08 09:43:36.743640129 +0000 UTC m=+0.118708818 container init a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 08 09:43:36 compute-0 podman[76243]: 2025-10-08 09:43:36.748658113 +0000 UTC m=+0.123726722 container start a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:43:36 compute-0 podman[76243]: 2025-10-08 09:43:36.752612915 +0000 UTC m=+0.127681554 container attach a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:36 compute-0 frosty_brattain[76259]: 167 167
Oct 08 09:43:36 compute-0 systemd[1]: libpod-a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7.scope: Deactivated successfully.
Oct 08 09:43:36 compute-0 podman[76243]: 2025-10-08 09:43:36.754479122 +0000 UTC m=+0.129547721 container died a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 09:43:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-152182e9440dfd12b7ef2a274c6b24dda8417d46d938a5e295e6281a40eafd4d-merged.mount: Deactivated successfully.
Oct 08 09:43:36 compute-0 podman[76243]: 2025-10-08 09:43:36.794407668 +0000 UTC m=+0.169476267 container remove a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:36 compute-0 systemd[1]: libpod-conmon-a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7.scope: Deactivated successfully.
Oct 08 09:43:36 compute-0 sudo[76303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpxctginhivzlhewqffidbxbrilkzpcr ; /usr/bin/python3'
Oct 08 09:43:36 compute-0 sudo[76303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:37 compute-0 podman[76304]: 2025-10-08 09:43:37.036485593 +0000 UTC m=+0.059714134 container create dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:43:37 compute-0 systemd[1]: Started libpod-conmon-dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d.scope.
Oct 08 09:43:37 compute-0 podman[76304]: 2025-10-08 09:43:37.015014844 +0000 UTC m=+0.038243415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:43:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:37 compute-0 podman[76304]: 2025-10-08 09:43:37.143306304 +0000 UTC m=+0.166534915 container init dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:37 compute-0 podman[76304]: 2025-10-08 09:43:37.156009135 +0000 UTC m=+0.179237706 container start dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:43:37 compute-0 podman[76304]: 2025-10-08 09:43:37.160891124 +0000 UTC m=+0.184119755 container attach dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 08 09:43:37 compute-0 python3[76307]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:43:37 compute-0 ceph-mon[73572]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:37 compute-0 ceph-mon[73572]: Added label _admin to host compute-0
Oct 08 09:43:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4076938877' entity='client.admin' 
Oct 08 09:43:37 compute-0 podman[76327]: 2025-10-08 09:43:37.292479686 +0000 UTC m=+0.072359263 container create 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 09:43:37 compute-0 systemd[1]: Started libpod-conmon-082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a.scope.
Oct 08 09:43:37 compute-0 podman[76327]: 2025-10-08 09:43:37.262240687 +0000 UTC m=+0.042120334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04fc885ce2b3a7a63e9d44de7d9ab3300af15034edb2d45945015c32842f485/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04fc885ce2b3a7a63e9d44de7d9ab3300af15034edb2d45945015c32842f485/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:37 compute-0 podman[76327]: 2025-10-08 09:43:37.419816227 +0000 UTC m=+0.199695844 container init 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 08 09:43:37 compute-0 podman[76327]: 2025-10-08 09:43:37.426656967 +0000 UTC m=+0.206536544 container start 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:37 compute-0 podman[76327]: 2025-10-08 09:43:37.430545287 +0000 UTC m=+0.210424844 container attach 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:43:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Oct 08 09:43:37 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/948747476' entity='client.admin' 
Oct 08 09:43:37 compute-0 systemd[1]: libpod-082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a.scope: Deactivated successfully.
Oct 08 09:43:37 compute-0 podman[76327]: 2025-10-08 09:43:37.855260832 +0000 UTC m=+0.635140409 container died 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b04fc885ce2b3a7a63e9d44de7d9ab3300af15034edb2d45945015c32842f485-merged.mount: Deactivated successfully.
Oct 08 09:43:37 compute-0 podman[76327]: 2025-10-08 09:43:37.898139429 +0000 UTC m=+0.678018966 container remove 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:37 compute-0 systemd[1]: libpod-conmon-082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a.scope: Deactivated successfully.
Oct 08 09:43:37 compute-0 sudo[76303]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]: [
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:     {
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         "available": false,
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         "being_replaced": false,
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         "ceph_device_lvm": false,
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         "lsm_data": {},
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         "lvs": [],
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         "path": "/dev/sr0",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         "rejected_reasons": [
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "Has a FileSystem",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "Insufficient space (<5GB)"
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         ],
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         "sys_api": {
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "actuators": null,
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "device_nodes": [
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:                 "sr0"
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             ],
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "devname": "sr0",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "human_readable_size": "482.00 KB",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "id_bus": "ata",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "model": "QEMU DVD-ROM",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "nr_requests": "2",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "parent": "/dev/sr0",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "partitions": {},
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "path": "/dev/sr0",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "removable": "1",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "rev": "2.5+",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "ro": "0",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "rotational": "0",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "sas_address": "",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "sas_device_handle": "",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "scheduler_mode": "mq-deadline",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "sectors": 0,
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "sectorsize": "2048",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "size": 493568.0,
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "support_discard": "2048",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "type": "disk",
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:             "vendor": "QEMU"
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:         }
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]:     }
Oct 08 09:43:37 compute-0 distracted_heyrovsky[76322]: ]
Oct 08 09:43:37 compute-0 systemd[1]: libpod-dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d.scope: Deactivated successfully.
Oct 08 09:43:38 compute-0 podman[77318]: 2025-10-08 09:43:38.032789444 +0000 UTC m=+0.029127615 container died dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432-merged.mount: Deactivated successfully.
Oct 08 09:43:38 compute-0 podman[77318]: 2025-10-08 09:43:38.077441196 +0000 UTC m=+0.073779267 container remove dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 09:43:38 compute-0 systemd[1]: libpod-conmon-dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d.scope: Deactivated successfully.
Oct 08 09:43:38 compute-0 sudo[76167]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:43:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:43:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 08 09:43:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:43:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:43:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:43:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:43:38 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:43:38 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:43:38 compute-0 sudo[77333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 08 09:43:38 compute-0 sudo[77333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77333]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:43:38 compute-0 sudo[77358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph
Oct 08 09:43:38 compute-0 sudo[77358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77358]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 sudo[77406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:43:38 compute-0 sudo[77406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77406]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 sudo[77455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:38 compute-0 sudo[77455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77455]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 sudo[77503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:43:38 compute-0 sudo[77503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77503]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:38 compute-0 sudo[77556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:43:38 compute-0 sudo[77556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77556]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 sudo[77581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:43:38 compute-0 sudo[77581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77581]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 sudo[77629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 08 09:43:38 compute-0 sudo[77629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77629]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:43:38 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:43:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/948747476' entity='client.admin' 
Oct 08 09:43:38 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:38 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:38 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:38 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:38 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:43:38 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:38 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:43:38 compute-0 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:43:38 compute-0 sudo[77679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:43:38 compute-0 sudo[77726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhethmigfadppwlcrvxujiwdbkmksxxs ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759916618.3351061-33583-210432935456000/async_wrapper.py j735635272403 30 /home/zuul/.ansible/tmp/ansible-tmp-1759916618.3351061-33583-210432935456000/AnsiballZ_command.py _'
Oct 08 09:43:38 compute-0 sudo[77679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:38 compute-0 sudo[77679]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:38 compute-0 sudo[77731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:43:38 compute-0 sudo[77731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:38 compute-0 sudo[77731]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 ansible-async_wrapper.py[77730]: Invoked with j735635272403 30 /home/zuul/.ansible/tmp/ansible-tmp-1759916618.3351061-33583-210432935456000/AnsiballZ_command.py _
Oct 08 09:43:39 compute-0 ansible-async_wrapper.py[77782]: Starting module and watcher
Oct 08 09:43:39 compute-0 sudo[77756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:43:39 compute-0 ansible-async_wrapper.py[77782]: Start watching 77783 (30)
Oct 08 09:43:39 compute-0 ansible-async_wrapper.py[77783]: Start module (77783)
Oct 08 09:43:39 compute-0 ansible-async_wrapper.py[77730]: Return async_wrapper task started.
Oct 08 09:43:39 compute-0 sudo[77756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[77756]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 sudo[77726]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 sudo[77786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:39 compute-0 sudo[77786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[77786]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 sudo[77811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:43:39 compute-0 sudo[77811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[77811]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 python3[77785]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:43:39 compute-0 podman[77844]: 2025-10-08 09:43:39.278501546 +0000 UTC m=+0.042928330 container create 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:39 compute-0 systemd[1]: Started libpod-conmon-179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9.scope.
Oct 08 09:43:39 compute-0 sudo[77870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:43:39 compute-0 sudo[77870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[77870]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27249e971f01809786676cb2ca44f2ddac5ba0f44fbc05e3469ae12b95201df2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27249e971f01809786676cb2ca44f2ddac5ba0f44fbc05e3469ae12b95201df2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:39 compute-0 podman[77844]: 2025-10-08 09:43:39.262679689 +0000 UTC m=+0.027106493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:39 compute-0 podman[77844]: 2025-10-08 09:43:39.380194479 +0000 UTC m=+0.144621273 container init 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:43:39 compute-0 podman[77844]: 2025-10-08 09:43:39.390484736 +0000 UTC m=+0.154911540 container start 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:39 compute-0 podman[77844]: 2025-10-08 09:43:39.394633212 +0000 UTC m=+0.159060046 container attach 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 09:43:39 compute-0 sudo[77903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:43:39 compute-0 sudo[77903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[77903]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 sudo[77929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:43:39 compute-0 sudo[77929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[77929]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:43:39 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:43:39 compute-0 sudo[77954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 08 09:43:39 compute-0 sudo[77954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[77954]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 sudo[77998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph
Oct 08 09:43:39 compute-0 sudo[77998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[77998]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 sudo[78023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:43:39 compute-0 sudo[78023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[78023]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:43:39 compute-0 stupefied_gauss[77899]: 
Oct 08 09:43:39 compute-0 stupefied_gauss[77899]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 08 09:43:39 compute-0 sudo[78048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:39 compute-0 sudo[78048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[78048]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 systemd[1]: libpod-179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9.scope: Deactivated successfully.
Oct 08 09:43:39 compute-0 podman[77844]: 2025-10-08 09:43:39.788810339 +0000 UTC m=+0.553237123 container died 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-27249e971f01809786676cb2ca44f2ddac5ba0f44fbc05e3469ae12b95201df2-merged.mount: Deactivated successfully.
Oct 08 09:43:39 compute-0 podman[77844]: 2025-10-08 09:43:39.840000932 +0000 UTC m=+0.604427726 container remove 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:39 compute-0 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:43:39 compute-0 systemd[1]: libpod-conmon-179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9.scope: Deactivated successfully.
Oct 08 09:43:39 compute-0 sudo[78075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:43:39 compute-0 sudo[78075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 ansible-async_wrapper.py[77783]: Module complete (77783)
Oct 08 09:43:39 compute-0 sudo[78075]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:39 compute-0 sudo[78134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:43:39 compute-0 sudo[78134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:39 compute-0 sudo[78134]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:43:40 compute-0 sudo[78159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78159]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 08 09:43:40 compute-0 sudo[78184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78184]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:43:40 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:43:40 compute-0 sudo[78215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:43:40 compute-0 sudo[78215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78215]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:43:40 compute-0 sudo[78257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78257]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:43:40 compute-0 sudo[78282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78282]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:40 compute-0 sudo[78307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78307]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:43:40 compute-0 sudo[78332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78332]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaafefhcajebhmvoeqdnbsruobuqjycq ; /usr/bin/python3'
Oct 08 09:43:40 compute-0 sudo[78379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:40 compute-0 sudo[78406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:43:40 compute-0 sudo[78406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78406]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 08 09:43:40 compute-0 python3[78385]: ansible-ansible.legacy.async_status Invoked with jid=j735635272403.77730 mode=status _async_dir=/root/.ansible_async
Oct 08 09:43:40 compute-0 sudo[78379]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:43:40 compute-0 sudo[78431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78431]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:43:40 compute-0 sudo[78456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78456]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:43:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:43:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:40 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 25a7b103-1f46-4154-b4d3-4ab41f29742b (Updating crash deployment (+1 -> 1))
Oct 08 09:43:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 08 09:43:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:43:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 08 09:43:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:43:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:40 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 08 09:43:40 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 08 09:43:40 compute-0 sudo[78550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bisjphbtdiutibpeiagctrroydafvymr ; /usr/bin/python3'
Oct 08 09:43:40 compute-0 sudo[78550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:40 compute-0 sudo[78509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:40 compute-0 sudo[78509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 sudo[78509]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:40 compute-0 sudo[78555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:40 compute-0 sudo[78555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:40 compute-0 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:43:40 compute-0 ceph-mon[73572]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:43:40 compute-0 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:43:40 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:40 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:40 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:40 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:43:40 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 08 09:43:40 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:40 compute-0 python3[78552]: ansible-ansible.legacy.async_status Invoked with jid=j735635272403.77730 mode=cleanup _async_dir=/root/.ansible_async
Oct 08 09:43:40 compute-0 sudo[78550]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:41 compute-0 podman[78623]: 2025-10-08 09:43:41.145055486 +0000 UTC m=+0.037467262 container create 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 09:43:41 compute-0 systemd[1]: Started libpod-conmon-0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87.scope.
Oct 08 09:43:41 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:41 compute-0 podman[78623]: 2025-10-08 09:43:41.130770227 +0000 UTC m=+0.023182023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:43:41 compute-0 podman[78623]: 2025-10-08 09:43:41.232282775 +0000 UTC m=+0.124694581 container init 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:41 compute-0 podman[78623]: 2025-10-08 09:43:41.238600319 +0000 UTC m=+0.131012125 container start 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:41 compute-0 lucid_mcclintock[78639]: 167 167
Oct 08 09:43:41 compute-0 systemd[1]: libpod-0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87.scope: Deactivated successfully.
Oct 08 09:43:41 compute-0 podman[78623]: 2025-10-08 09:43:41.242747726 +0000 UTC m=+0.135159522 container attach 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 09:43:41 compute-0 podman[78623]: 2025-10-08 09:43:41.243002154 +0000 UTC m=+0.135413950 container died 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 09:43:41 compute-0 sudo[78665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgrzfyrhzsmxyffrqfdozuozzzalfrmt ; /usr/bin/python3'
Oct 08 09:43:41 compute-0 sudo[78665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-787f82196a0829ebe12cd6ecb93d3c4be046e8791b6e1816f762cdecf98571db-merged.mount: Deactivated successfully.
Oct 08 09:43:41 compute-0 podman[78623]: 2025-10-08 09:43:41.277167514 +0000 UTC m=+0.169579290 container remove 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:41 compute-0 systemd[1]: libpod-conmon-0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87.scope: Deactivated successfully.
Oct 08 09:43:41 compute-0 systemd[1]: Reloading.
Oct 08 09:43:41 compute-0 python3[78670]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:43:41 compute-0 systemd-rc-local-generator[78708]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:43:41 compute-0 systemd-sysv-generator[78714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:43:41 compute-0 sudo[78665]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:41 compute-0 systemd[1]: Reloading.
Oct 08 09:43:41 compute-0 systemd-rc-local-generator[78749]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:43:41 compute-0 systemd-sysv-generator[78752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:43:41 compute-0 ceph-mon[73572]: Deploying daemon crash.compute-0 on compute-0
Oct 08 09:43:41 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:43:41 compute-0 sudo[78784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orjrsbhhojzwpgsmvcejzafcsrctveqj ; /usr/bin/python3'
Oct 08 09:43:41 compute-0 sudo[78784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:42 compute-0 python3[78788]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:43:42 compute-0 podman[78830]: 2025-10-08 09:43:42.1668558 +0000 UTC m=+0.062216762 container create b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:42 compute-0 podman[78836]: 2025-10-08 09:43:42.173072181 +0000 UTC m=+0.057911660 container create f2b90c859a7310489a10feb4ada2b4bf5595269880e09d000b5461d6bc9e0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 08 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ad304a810b11c2de94c3170e19f6087dccaf6328800bae4fc4a34e9d5f5b5/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ad304a810b11c2de94c3170e19f6087dccaf6328800bae4fc4a34e9d5f5b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ad304a810b11c2de94c3170e19f6087dccaf6328800bae4fc4a34e9d5f5b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ad304a810b11c2de94c3170e19f6087dccaf6328800bae4fc4a34e9d5f5b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:42 compute-0 systemd[1]: Started libpod-conmon-b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45.scope.
Oct 08 09:43:42 compute-0 podman[78836]: 2025-10-08 09:43:42.234675553 +0000 UTC m=+0.119515032 container init f2b90c859a7310489a10feb4ada2b4bf5595269880e09d000b5461d6bc9e0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 08 09:43:42 compute-0 podman[78830]: 2025-10-08 09:43:42.144540035 +0000 UTC m=+0.039901087 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:42 compute-0 podman[78836]: 2025-10-08 09:43:42.149155206 +0000 UTC m=+0.033994705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:43:42 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f57fec2920682b544793e447e9a3500e0326a9c9f4c51ec3ae862c6378011c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f57fec2920682b544793e447e9a3500e0326a9c9f4c51ec3ae862c6378011c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f57fec2920682b544793e447e9a3500e0326a9c9f4c51ec3ae862c6378011c6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:42 compute-0 podman[78836]: 2025-10-08 09:43:42.25020127 +0000 UTC m=+0.135040759 container start f2b90c859a7310489a10feb4ada2b4bf5595269880e09d000b5461d6bc9e0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:42 compute-0 bash[78836]: f2b90c859a7310489a10feb4ada2b4bf5595269880e09d000b5461d6bc9e0698
Oct 08 09:43:42 compute-0 podman[78830]: 2025-10-08 09:43:42.263339043 +0000 UTC m=+0.158700005 container init b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:43:42 compute-0 systemd[1]: Started Ceph crash.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:43:42 compute-0 podman[78830]: 2025-10-08 09:43:42.276312682 +0000 UTC m=+0.171673634 container start b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:42 compute-0 podman[78830]: 2025-10-08 09:43:42.280248922 +0000 UTC m=+0.175609904 container attach b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 08 09:43:42 compute-0 sudo[78555]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:43:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 08 09:43:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:42 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 25a7b103-1f46-4154-b4d3-4ab41f29742b (Updating crash deployment (+1 -> 1))
Oct 08 09:43:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 08 09:43:42 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 25a7b103-1f46-4154-b4d3-4ab41f29742b (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 08 09:43:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 08 09:43:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 08 09:43:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:42 compute-0 sudo[78876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:43:42 compute-0 sudo[78876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:42 compute-0 sudo[78876]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.433+0000 7fdfd4548640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 08 09:43:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.433+0000 7fdfd4548640 -1 AuthRegistry(0x7fdfcc0698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 08 09:43:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.434+0000 7fdfd4548640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 08 09:43:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.434+0000 7fdfd4548640 -1 AuthRegistry(0x7fdfd4546ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 08 09:43:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.437+0000 7fdfd22bd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 08 09:43:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.437+0000 7fdfd4548640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 08 09:43:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 08 09:43:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 08 09:43:42 compute-0 sudo[78920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:42 compute-0 sudo[78920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:42 compute-0 sudo[78920]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:42 compute-0 sudo[78955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 09:43:42 compute-0 sudo[78955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:42 compute-0 ceph-mgr[73869]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 08 09:43:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:42 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 08 09:43:42 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 1 completed events
Oct 08 09:43:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:43:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:43:42 compute-0 gracious_allen[78868]: 
Oct 08 09:43:42 compute-0 gracious_allen[78868]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 08 09:43:42 compute-0 systemd[1]: libpod-b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45.scope: Deactivated successfully.
Oct 08 09:43:42 compute-0 conmon[78868]: conmon b397c2eb05b79f4dbe51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45.scope/container/memory.events
Oct 08 09:43:42 compute-0 podman[78830]: 2025-10-08 09:43:42.66696107 +0000 UTC m=+0.562322032 container died b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 09:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f57fec2920682b544793e447e9a3500e0326a9c9f4c51ec3ae862c6378011c6-merged.mount: Deactivated successfully.
Oct 08 09:43:42 compute-0 podman[78830]: 2025-10-08 09:43:42.716703558 +0000 UTC m=+0.612064520 container remove b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 09:43:42 compute-0 systemd[1]: libpod-conmon-b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45.scope: Deactivated successfully.
Oct 08 09:43:42 compute-0 sudo[78784]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:43 compute-0 podman[79070]: 2025-10-08 09:43:43.00816378 +0000 UTC m=+0.079884124 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 08 09:43:43 compute-0 sudo[79113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjwemztpgirqhhkewftfxzetctmvzsgq ; /usr/bin/python3'
Oct 08 09:43:43 compute-0 sudo[79113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:43 compute-0 podman[79070]: 2025-10-08 09:43:43.092202831 +0000 UTC m=+0.163923175 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 08 09:43:43 compute-0 python3[79115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:43:43 compute-0 podman[79145]: 2025-10-08 09:43:43.270267651 +0000 UTC m=+0.044463747 container create 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:43:43 compute-0 systemd[1]: Started libpod-conmon-46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba.scope.
Oct 08 09:43:43 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7ea5ce9a534376979795f206d43f17df78353c96da16ca2ac0c9b2b992e7e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7ea5ce9a534376979795f206d43f17df78353c96da16ca2ac0c9b2b992e7e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7ea5ce9a534376979795f206d43f17df78353c96da16ca2ac0c9b2b992e7e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:43 compute-0 podman[79145]: 2025-10-08 09:43:43.25298676 +0000 UTC m=+0.027182866 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:43 compute-0 sudo[78955]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:43 compute-0 podman[79145]: 2025-10-08 09:43:43.355519469 +0000 UTC m=+0.129715585 container init 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:43:43 compute-0 podman[79145]: 2025-10-08 09:43:43.361349268 +0000 UTC m=+0.135545354 container start 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:43 compute-0 podman[79145]: 2025-10-08 09:43:43.364209736 +0000 UTC m=+0.138405832 container attach 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 sudo[79181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:43:43 compute-0 sudo[79181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:43 compute-0 sudo[79181]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:43 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 08 09:43:43 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:43 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 08 09:43:43 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 08 09:43:43 compute-0 sudo[79208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:43 compute-0 sudo[79208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:43 compute-0 sudo[79208]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:43 compute-0 sudo[79250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:43 compute-0 sudo[79250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Oct 08 09:43:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3914095065' entity='client.admin' 
Oct 08 09:43:43 compute-0 systemd[1]: libpod-46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba.scope: Deactivated successfully.
Oct 08 09:43:43 compute-0 podman[79145]: 2025-10-08 09:43:43.735118648 +0000 UTC m=+0.509314734 container died 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 09:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7e7ea5ce9a534376979795f206d43f17df78353c96da16ca2ac0c9b2b992e7e-merged.mount: Deactivated successfully.
Oct 08 09:43:43 compute-0 podman[79145]: 2025-10-08 09:43:43.803960172 +0000 UTC m=+0.578156258 container remove 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 09:43:43 compute-0 systemd[1]: libpod-conmon-46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba.scope: Deactivated successfully.
Oct 08 09:43:43 compute-0 sudo[79113]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:43 compute-0 podman[79307]: 2025-10-08 09:43:43.892269414 +0000 UTC m=+0.034793549 container create c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:43 compute-0 systemd[1]: Started libpod-conmon-c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8.scope.
Oct 08 09:43:43 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:43 compute-0 podman[79307]: 2025-10-08 09:43:43.942120936 +0000 UTC m=+0.084645091 container init c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:43:43 compute-0 podman[79307]: 2025-10-08 09:43:43.948177063 +0000 UTC m=+0.090701198 container start c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 09:43:43 compute-0 podman[79307]: 2025-10-08 09:43:43.951183884 +0000 UTC m=+0.093708039 container attach c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:43 compute-0 brave_matsumoto[79324]: 167 167
Oct 08 09:43:43 compute-0 systemd[1]: libpod-c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8.scope: Deactivated successfully.
Oct 08 09:43:43 compute-0 podman[79307]: 2025-10-08 09:43:43.952143734 +0000 UTC m=+0.094667869 container died c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-56fb366c8aa6e88c2bd3f00d14312ca637745a404d169cfc9e68a34c24d130bb-merged.mount: Deactivated successfully.
Oct 08 09:43:43 compute-0 podman[79307]: 2025-10-08 09:43:43.877473891 +0000 UTC m=+0.019998046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:43 compute-0 sudo[79354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exmuygmpkdyhysibdmdsxiwectjcuhyt ; /usr/bin/python3'
Oct 08 09:43:43 compute-0 sudo[79354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:43 compute-0 podman[79307]: 2025-10-08 09:43:43.983192427 +0000 UTC m=+0.125716562 container remove c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:43 compute-0 systemd[1]: libpod-conmon-c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8.scope: Deactivated successfully.
Oct 08 09:43:44 compute-0 sudo[79250]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ixicfj (unknown last config time)...
Oct 08 09:43:44 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ixicfj (unknown last config time)...
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct 08 09:43:44 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct 08 09:43:44 compute-0 ansible-async_wrapper.py[77782]: Done in kid B.
Oct 08 09:43:44 compute-0 sudo[79366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:44 compute-0 sudo[79366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:44 compute-0 sudo[79366]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:44 compute-0 python3[79365]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:43:44 compute-0 sudo[79391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:43:44 compute-0 sudo[79391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:44 compute-0 podman[79394]: 2025-10-08 09:43:44.155680075 +0000 UTC m=+0.035505971 container create a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:44 compute-0 systemd[1]: Started libpod-conmon-a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f.scope.
Oct 08 09:43:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2175227c24a0970519a32ca5fba05acaadcece1a2c593dd6dd4d8bf735c34e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2175227c24a0970519a32ca5fba05acaadcece1a2c593dd6dd4d8bf735c34e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2175227c24a0970519a32ca5fba05acaadcece1a2c593dd6dd4d8bf735c34e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:44 compute-0 podman[79394]: 2025-10-08 09:43:44.232082092 +0000 UTC m=+0.111907978 container init a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 09:43:44 compute-0 podman[79394]: 2025-10-08 09:43:44.139895691 +0000 UTC m=+0.019721607 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:44 compute-0 podman[79394]: 2025-10-08 09:43:44.237406115 +0000 UTC m=+0.117232011 container start a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 09:43:44 compute-0 podman[79394]: 2025-10-08 09:43:44.240342176 +0000 UTC m=+0.120168072 container attach a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3914095065' entity='client.admin' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:44 compute-0 podman[79470]: 2025-10-08 09:43:44.461127927 +0000 UTC m=+0.048361716 container create e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 09:43:44 compute-0 systemd[1]: Started libpod-conmon-e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e.scope.
Oct 08 09:43:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:44 compute-0 podman[79470]: 2025-10-08 09:43:44.441511254 +0000 UTC m=+0.028745073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:44 compute-0 podman[79470]: 2025-10-08 09:43:44.552501154 +0000 UTC m=+0.139735013 container init e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 09:43:44 compute-0 podman[79470]: 2025-10-08 09:43:44.562822081 +0000 UTC m=+0.150055890 container start e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:43:44 compute-0 affectionate_merkle[79487]: 167 167
Oct 08 09:43:44 compute-0 systemd[1]: libpod-e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e.scope: Deactivated successfully.
Oct 08 09:43:44 compute-0 podman[79470]: 2025-10-08 09:43:44.572024904 +0000 UTC m=+0.159258773 container attach e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:44 compute-0 podman[79470]: 2025-10-08 09:43:44.572708854 +0000 UTC m=+0.159942653 container died e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c604830965ad5dc36b89f6815f054c21707d02bc73ff36b35f1b1885674271ff-merged.mount: Deactivated successfully.
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2813966862' entity='client.admin' 
Oct 08 09:43:44 compute-0 podman[79470]: 2025-10-08 09:43:44.628074785 +0000 UTC m=+0.215308564 container remove e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:44 compute-0 systemd[1]: libpod-conmon-e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e.scope: Deactivated successfully.
Oct 08 09:43:44 compute-0 systemd[1]: libpod-a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f.scope: Deactivated successfully.
Oct 08 09:43:44 compute-0 podman[79394]: 2025-10-08 09:43:44.640427954 +0000 UTC m=+0.520253850 container died a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2175227c24a0970519a32ca5fba05acaadcece1a2c593dd6dd4d8bf735c34e4-merged.mount: Deactivated successfully.
Oct 08 09:43:44 compute-0 podman[79394]: 2025-10-08 09:43:44.680399882 +0000 UTC m=+0.560225768 container remove a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:43:44 compute-0 sudo[79391]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:43:44 compute-0 sudo[79354]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:44 compute-0 systemd[1]: libpod-conmon-a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f.scope: Deactivated successfully.
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:43:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:43:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:44 compute-0 sudo[79517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:43:44 compute-0 sudo[79517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:44 compute-0 sudo[79517]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:44 compute-0 sudo[79565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzhvqktrahlbwzoovtfhxjutbwjpzbyk ; /usr/bin/python3'
Oct 08 09:43:44 compute-0 sudo[79565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:45 compute-0 python3[79567]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:43:45 compute-0 podman[79568]: 2025-10-08 09:43:45.166495662 +0000 UTC m=+0.062407577 container create 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 08 09:43:45 compute-0 systemd[1]: Started libpod-conmon-9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59.scope.
Oct 08 09:43:45 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:45 compute-0 podman[79568]: 2025-10-08 09:43:45.140574886 +0000 UTC m=+0.036486851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a080ef48b401fca3d59b9592e810b7920e03a752a69dffe9f4fb8de05d3a4e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a080ef48b401fca3d59b9592e810b7920e03a752a69dffe9f4fb8de05d3a4e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a080ef48b401fca3d59b9592e810b7920e03a752a69dffe9f4fb8de05d3a4e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:45 compute-0 podman[79568]: 2025-10-08 09:43:45.253543515 +0000 UTC m=+0.149455420 container init 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 09:43:45 compute-0 podman[79568]: 2025-10-08 09:43:45.264509072 +0000 UTC m=+0.160420987 container start 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 09:43:45 compute-0 podman[79568]: 2025-10-08 09:43:45.268791784 +0000 UTC m=+0.164703669 container attach 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 08 09:43:45 compute-0 ceph-mon[73572]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 08 09:43:45 compute-0 ceph-mon[73572]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 08 09:43:45 compute-0 ceph-mon[73572]: Reconfiguring mgr.compute-0.ixicfj (unknown last config time)...
Oct 08 09:43:45 compute-0 ceph-mon[73572]: Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct 08 09:43:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2813966862' entity='client.admin' 
Oct 08 09:43:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:43:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Oct 08 09:43:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3016004367' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 08 09:43:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 08 09:43:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:43:46 compute-0 ceph-mon[73572]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3016004367' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 08 09:43:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3016004367' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 08 09:43:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 08 09:43:46 compute-0 boring_elbakyan[79584]: set require_min_compat_client to mimic
Oct 08 09:43:46 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 08 09:43:46 compute-0 systemd[1]: libpod-9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59.scope: Deactivated successfully.
Oct 08 09:43:46 compute-0 podman[79568]: 2025-10-08 09:43:46.413134092 +0000 UTC m=+1.309045977 container died 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 09:43:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-39a080ef48b401fca3d59b9592e810b7920e03a752a69dffe9f4fb8de05d3a4e-merged.mount: Deactivated successfully.
Oct 08 09:43:46 compute-0 podman[79568]: 2025-10-08 09:43:46.451154599 +0000 UTC m=+1.347066484 container remove 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:46 compute-0 systemd[1]: libpod-conmon-9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59.scope: Deactivated successfully.
Oct 08 09:43:46 compute-0 sudo[79565]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:46 compute-0 sudo[79646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdljqgnzmbfewajebwsmcohzdeltssbz ; /usr/bin/python3'
Oct 08 09:43:46 compute-0 sudo[79646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:47 compute-0 python3[79648]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:43:47 compute-0 podman[79649]: 2025-10-08 09:43:47.172277449 +0000 UTC m=+0.063376987 container create 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:43:47 compute-0 systemd[1]: Started libpod-conmon-1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09.scope.
Oct 08 09:43:47 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:47 compute-0 podman[79649]: 2025-10-08 09:43:47.144080073 +0000 UTC m=+0.035179611 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a8cfa8df5d8caab09bb643968b01c91c169fe0863a650e4c6d7613b77f1cf4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a8cfa8df5d8caab09bb643968b01c91c169fe0863a650e4c6d7613b77f1cf4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a8cfa8df5d8caab09bb643968b01c91c169fe0863a650e4c6d7613b77f1cf4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:47 compute-0 podman[79649]: 2025-10-08 09:43:47.260348773 +0000 UTC m=+0.151448311 container init 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 08 09:43:47 compute-0 podman[79649]: 2025-10-08 09:43:47.270992441 +0000 UTC m=+0.162091959 container start 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 09:43:47 compute-0 podman[79649]: 2025-10-08 09:43:47.275622433 +0000 UTC m=+0.166721951 container attach 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Oct 08 09:43:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3016004367' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 08 09:43:47 compute-0 ceph-mon[73572]: osdmap e3: 0 total, 0 up, 0 in
Oct 08 09:43:47 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:47 compute-0 sudo[79688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:43:47 compute-0 sudo[79688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:47 compute-0 sudo[79688]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:47 compute-0 sudo[79713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Oct 08 09:43:47 compute-0 sudo[79713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:48 compute-0 sudo[79713]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 08 09:43:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 08 09:43:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 08 09:43:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 08 09:43:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 ceph-mgr[73869]: [cephadm INFO root] Added host compute-0
Oct 08 09:43:48 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 08 09:43:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:43:48 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:43:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:43:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:43:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 sudo[79758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:43:48 compute-0 sudo[79758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:43:48 compute-0 sudo[79758]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:43:48 compute-0 ceph-mon[73572]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:48 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:43:48 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:43:48 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:49 compute-0 ceph-mon[73572]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:43:49 compute-0 ceph-mon[73572]: Added host compute-0
Oct 08 09:43:49 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Oct 08 09:43:49 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Oct 08 09:43:50 compute-0 ceph-mon[73572]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:51 compute-0 ceph-mon[73572]: Deploying cephadm binary to compute-1
Oct 08 09:43:52 compute-0 ceph-mon[73572]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:43:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:43:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:43:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:43:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:43:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:43:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:43:53 compute-0 ceph-mon[73572]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 08 09:43:53 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:53 compute-0 ceph-mgr[73869]: [cephadm INFO root] Added host compute-1
Oct 08 09:43:53 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added host compute-1
Oct 08 09:43:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:43:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:43:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:54 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:54 compute-0 ceph-mon[73572]: Added host compute-1
Oct 08 09:43:54 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:54 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:54 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Oct 08 09:43:54 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Oct 08 09:43:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:43:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:55 compute-0 ceph-mon[73572]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:55 compute-0 ceph-mon[73572]: Deploying cephadm binary to compute-2
Oct 08 09:43:55 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:57 compute-0 ceph-mon[73572]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 08 09:43:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: [cephadm INFO root] Added host compute-2
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added host compute-2
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 08 09:43:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 08 09:43:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 08 09:43:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 08 09:43:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 08 09:43:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 08 09:43:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Oct 08 09:43:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:58 compute-0 angry_newton[79664]: Added host 'compute-0' with addr '192.168.122.100'
Oct 08 09:43:58 compute-0 angry_newton[79664]: Added host 'compute-1' with addr '192.168.122.101'
Oct 08 09:43:58 compute-0 angry_newton[79664]: Added host 'compute-2' with addr '192.168.122.102'
Oct 08 09:43:58 compute-0 angry_newton[79664]: Scheduled mon update...
Oct 08 09:43:58 compute-0 angry_newton[79664]: Scheduled mgr update...
Oct 08 09:43:58 compute-0 angry_newton[79664]: Scheduled osd.default_drive_group update...
Oct 08 09:43:59 compute-0 systemd[1]: libpod-1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09.scope: Deactivated successfully.
Oct 08 09:43:59 compute-0 podman[79649]: 2025-10-08 09:43:59.009416569 +0000 UTC m=+11.900516127 container died 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 08 09:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a8cfa8df5d8caab09bb643968b01c91c169fe0863a650e4c6d7613b77f1cf4-merged.mount: Deactivated successfully.
Oct 08 09:43:59 compute-0 podman[79649]: 2025-10-08 09:43:59.056935278 +0000 UTC m=+11.948034836 container remove 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Oct 08 09:43:59 compute-0 systemd[1]: libpod-conmon-1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09.scope: Deactivated successfully.
Oct 08 09:43:59 compute-0 sudo[79646]: pam_unix(sudo:session): session closed for user root
Oct 08 09:43:59 compute-0 sudo[79819]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwhjkkhjyoegggwsgjyrjyjcknvhhceg ; /usr/bin/python3'
Oct 08 09:43:59 compute-0 sudo[79819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:43:59 compute-0 python3[79821]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:43:59 compute-0 podman[79823]: 2025-10-08 09:43:59.606817487 +0000 UTC m=+0.050449700 container create 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:43:59 compute-0 systemd[1]: Started libpod-conmon-38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945.scope.
Oct 08 09:43:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117f16257f3336811983e5370edc7b965984433d64d9eaf16806a44ccc1ed99d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117f16257f3336811983e5370edc7b965984433d64d9eaf16806a44ccc1ed99d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117f16257f3336811983e5370edc7b965984433d64d9eaf16806a44ccc1ed99d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:43:59 compute-0 podman[79823]: 2025-10-08 09:43:59.58832954 +0000 UTC m=+0.031961773 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:43:59 compute-0 podman[79823]: 2025-10-08 09:43:59.693564832 +0000 UTC m=+0.137197125 container init 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 09:43:59 compute-0 podman[79823]: 2025-10-08 09:43:59.699797954 +0000 UTC m=+0.143430207 container start 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:59 compute-0 podman[79823]: 2025-10-08 09:43:59.704653422 +0000 UTC m=+0.148285715 container attach 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:43:59 compute-0 ceph-mon[73572]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:43:59 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:59 compute-0 ceph-mon[73572]: Added host compute-2
Oct 08 09:43:59 compute-0 ceph-mon[73572]: Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 08 09:43:59 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:59 compute-0 ceph-mon[73572]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 08 09:43:59 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:43:59 compute-0 ceph-mon[73572]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 08 09:43:59 compute-0 ceph-mon[73572]: Marking host: compute-1 for OSDSpec preview refresh.
Oct 08 09:43:59 compute-0 ceph-mon[73572]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 08 09:43:59 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 08 09:44:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2204811910' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 08 09:44:00 compute-0 hungry_elbakyan[79840]: 
Oct 08 09:44:00 compute-0 hungry_elbakyan[79840]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":56,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-08T09:43:01.374245+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-08T09:43:01.375926+0000","services":{}},"progress_events":{}}
Oct 08 09:44:00 compute-0 systemd[1]: libpod-38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945.scope: Deactivated successfully.
Oct 08 09:44:00 compute-0 podman[79865]: 2025-10-08 09:44:00.161905317 +0000 UTC m=+0.025958769 container died 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-117f16257f3336811983e5370edc7b965984433d64d9eaf16806a44ccc1ed99d-merged.mount: Deactivated successfully.
Oct 08 09:44:00 compute-0 podman[79865]: 2025-10-08 09:44:00.192632561 +0000 UTC m=+0.056685943 container remove 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Oct 08 09:44:00 compute-0 systemd[1]: libpod-conmon-38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945.scope: Deactivated successfully.
Oct 08 09:44:00 compute-0 sudo[79819]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2204811910' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 08 09:44:01 compute-0 ceph-mon[73572]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:04 compute-0 ceph-mon[73572]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:06 compute-0 ceph-mon[73572]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:08 compute-0 ceph-mon[73572]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:10 compute-0 ceph-mon[73572]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:12 compute-0 ceph-mon[73572]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:14 compute-0 ceph-mon[73572]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:44:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:44:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:44:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:44:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 08 09:44:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:44:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:44:14 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:44:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:44:14 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:44:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:44:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:14 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:44:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:44:15 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:15 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:15 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:15 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:15 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:44:15 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:15 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:44:15 compute-0 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:44:15 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:44:15 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:44:15 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:44:15 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:44:16 compute-0 ceph-mon[73572]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:16 compute-0 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:44:16 compute-0 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:44:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:44:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:44:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:44:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:16 compute-0 ceph-mgr[73869]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 08 09:44:16 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 08 09:44:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:16 compute-0 ceph-mgr[73869]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 08 09:44:16 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 08 09:44:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:16 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 959ab803-73bd-457e-a384-35c9535dfa13 (Updating crash deployment (+1 -> 2))
Oct 08 09:44:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 08 09:44:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:44:16.291+0000 7fa806647640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: service_name: mon
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: placement:
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   hosts:
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   - compute-0
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   - compute-1
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   - compute-2
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:44:16.292+0000 7fa806647640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: service_name: mgr
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: placement:
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   hosts:
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   - compute-0
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   - compute-1
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   - compute-2
Oct 08 09:44:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 08 09:44:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 08 09:44:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:44:16 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:16 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Oct 08 09:44:16 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Oct 08 09:44:17 compute-0 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:44:17 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:17 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:17 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:17 compute-0 ceph-mon[73572]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 08 09:44:17 compute-0 ceph-mon[73572]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:17 compute-0 ceph-mon[73572]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 08 09:44:17 compute-0 ceph-mon[73572]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:17 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:44:17 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 08 09:44:17 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:17 compute-0 ceph-mon[73572]: Deploying daemon crash.compute-1 on compute-1
Oct 08 09:44:17 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 08 09:44:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:18 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 959ab803-73bd-457e-a384-35c9535dfa13 (Updating crash deployment (+1 -> 2))
Oct 08 09:44:18 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 959ab803-73bd-457e-a384-35c9535dfa13 (Updating crash deployment (+1 -> 2)) in 2 seconds
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:44:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:44:18 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:18 compute-0 sudo[79880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:44:18 compute-0 sudo[79880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:18 compute-0 sudo[79880]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:18 compute-0 sudo[79905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:44:18 compute-0 sudo[79905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:19 compute-0 ceph-mon[73572]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:44:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:44:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:44:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:19 compute-0 podman[79970]: 2025-10-08 09:44:19.314532608 +0000 UTC m=+0.062631645 container create 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 09:44:19 compute-0 systemd[1]: Started libpod-conmon-2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62.scope.
Oct 08 09:44:19 compute-0 podman[79970]: 2025-10-08 09:44:19.289147098 +0000 UTC m=+0.037246115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:19 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:19 compute-0 podman[79970]: 2025-10-08 09:44:19.40350614 +0000 UTC m=+0.151605237 container init 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 09:44:19 compute-0 podman[79970]: 2025-10-08 09:44:19.409786864 +0000 UTC m=+0.157885871 container start 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 09:44:19 compute-0 podman[79970]: 2025-10-08 09:44:19.413639781 +0000 UTC m=+0.161738778 container attach 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:44:19 compute-0 laughing_hugle[79986]: 167 167
Oct 08 09:44:19 compute-0 systemd[1]: libpod-2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62.scope: Deactivated successfully.
Oct 08 09:44:19 compute-0 podman[79970]: 2025-10-08 09:44:19.418239733 +0000 UTC m=+0.166338730 container died 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:44:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-30c3fd269ff53e7accbd49fbe6cd2022214fbbce6383cfbcae05585c06b1ba98-merged.mount: Deactivated successfully.
Oct 08 09:44:19 compute-0 podman[79970]: 2025-10-08 09:44:19.457009804 +0000 UTC m=+0.205108801 container remove 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 09:44:19 compute-0 systemd[1]: libpod-conmon-2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62.scope: Deactivated successfully.
Oct 08 09:44:19 compute-0 podman[80009]: 2025-10-08 09:44:19.644289696 +0000 UTC m=+0.040689041 container create 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:44:19 compute-0 systemd[1]: Started libpod-conmon-35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa.scope.
Oct 08 09:44:19 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:19 compute-0 podman[80009]: 2025-10-08 09:44:19.705661111 +0000 UTC m=+0.102060436 container init 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 08 09:44:19 compute-0 podman[80009]: 2025-10-08 09:44:19.714185963 +0000 UTC m=+0.110585288 container start 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:44:19 compute-0 podman[80009]: 2025-10-08 09:44:19.71830292 +0000 UTC m=+0.114702245 container attach 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 09:44:19 compute-0 podman[80009]: 2025-10-08 09:44:19.624917551 +0000 UTC m=+0.021316906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:20 compute-0 funny_noether[80026]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:44:20 compute-0 funny_noether[80026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:20 compute-0 funny_noether[80026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:20 compute-0 funny_noether[80026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 85fe3e7b-5e0f-4a19-934c-310215b2e933
Oct 08 09:44:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"} v 0)
Oct 08 09:44:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2322205066' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"}]: dispatch
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:44:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2322205066' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"}]': finished
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 08 09:44:20 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:20 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"} v 0)
Oct 08 09:44:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1515855431' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"}]: dispatch
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:44:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1515855431' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"}]': finished
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 08 09:44:20 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:20 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:20 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 08 09:44:20 compute-0 funny_noether[80026]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 08 09:44:20 compute-0 funny_noether[80026]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 08 09:44:20 compute-0 lvm[80087]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:44:20 compute-0 lvm[80087]: VG ceph_vg0 finished
Oct 08 09:44:20 compute-0 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 08 09:44:20 compute-0 funny_noether[80026]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:20 compute-0 funny_noether[80026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 08 09:44:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct 08 09:44:21 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4120750441' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 08 09:44:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct 08 09:44:21 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796036156' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 08 09:44:21 compute-0 funny_noether[80026]:  stderr: got monmap epoch 1
Oct 08 09:44:21 compute-0 funny_noether[80026]: --> Creating keyring file for osd.1
Oct 08 09:44:21 compute-0 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 08 09:44:21 compute-0 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 08 09:44:21 compute-0 funny_noether[80026]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 85fe3e7b-5e0f-4a19-934c-310215b2e933 --setuser ceph --setgroup ceph
Oct 08 09:44:21 compute-0 ceph-mon[73572]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2322205066' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"}]: dispatch
Oct 08 09:44:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2322205066' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"}]': finished
Oct 08 09:44:21 compute-0 ceph-mon[73572]: osdmap e4: 1 total, 0 up, 1 in
Oct 08 09:44:21 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1515855431' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"}]: dispatch
Oct 08 09:44:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1515855431' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"}]': finished
Oct 08 09:44:21 compute-0 ceph-mon[73572]: osdmap e5: 2 total, 0 up, 2 in
Oct 08 09:44:21 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:21 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/4120750441' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 08 09:44:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3796036156' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:22 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 08 09:44:22 compute-0 ceph-mon[73572]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:44:22
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [balancer INFO root] No pools available
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 2 completed events
Oct 08 09:44:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:44:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:44:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:44:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:23 compute-0 ceph-mon[73572]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:23 compute-0 funny_noether[80026]:  stderr: 2025-10-08T09:44:21.147+0000 7f7a2550b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Oct 08 09:44:23 compute-0 funny_noether[80026]:  stderr: 2025-10-08T09:44:21.409+0000 7f7a2550b740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 08 09:44:23 compute-0 funny_noether[80026]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 08 09:44:23 compute-0 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 08 09:44:23 compute-0 funny_noether[80026]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 08 09:44:24 compute-0 funny_noether[80026]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:24 compute-0 funny_noether[80026]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:24 compute-0 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 08 09:44:24 compute-0 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 08 09:44:24 compute-0 funny_noether[80026]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 08 09:44:24 compute-0 funny_noether[80026]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 08 09:44:24 compute-0 systemd[1]: libpod-35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa.scope: Deactivated successfully.
Oct 08 09:44:24 compute-0 systemd[1]: libpod-35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa.scope: Consumed 2.000s CPU time.
Oct 08 09:44:24 compute-0 podman[80009]: 2025-10-08 09:44:24.194099521 +0000 UTC m=+4.590498846 container died 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed-merged.mount: Deactivated successfully.
Oct 08 09:44:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:24 compute-0 podman[80009]: 2025-10-08 09:44:24.320189814 +0000 UTC m=+4.716589129 container remove 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:24 compute-0 systemd[1]: libpod-conmon-35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa.scope: Deactivated successfully.
Oct 08 09:44:24 compute-0 sudo[79905]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:24 compute-0 sudo[81006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:44:24 compute-0 sudo[81006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:24 compute-0 sudo[81006]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:24 compute-0 sudo[81031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:44:24 compute-0 sudo[81031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:24 compute-0 podman[81096]: 2025-10-08 09:44:24.883022731 +0000 UTC m=+0.035488542 container create 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 08 09:44:24 compute-0 systemd[1]: Started libpod-conmon-62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85.scope.
Oct 08 09:44:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:24 compute-0 podman[81096]: 2025-10-08 09:44:24.868267747 +0000 UTC m=+0.020733578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:24 compute-0 podman[81096]: 2025-10-08 09:44:24.966320029 +0000 UTC m=+0.118785870 container init 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 08 09:44:24 compute-0 podman[81096]: 2025-10-08 09:44:24.974772729 +0000 UTC m=+0.127238540 container start 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:24 compute-0 podman[81096]: 2025-10-08 09:44:24.977692778 +0000 UTC m=+0.130158589 container attach 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 08 09:44:24 compute-0 quirky_jepsen[81113]: 167 167
Oct 08 09:44:24 compute-0 systemd[1]: libpod-62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85.scope: Deactivated successfully.
Oct 08 09:44:24 compute-0 podman[81096]: 2025-10-08 09:44:24.980435833 +0000 UTC m=+0.132901654 container died 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 09:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a3fb35d8c1694bc5513e8969e289cfb01c6ba03f5104e3279a3a4cbdc55aa58-merged.mount: Deactivated successfully.
Oct 08 09:44:25 compute-0 podman[81096]: 2025-10-08 09:44:25.017544763 +0000 UTC m=+0.170010614 container remove 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:44:25 compute-0 systemd[1]: libpod-conmon-62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85.scope: Deactivated successfully.
Oct 08 09:44:25 compute-0 podman[81137]: 2025-10-08 09:44:25.222224309 +0000 UTC m=+0.053637508 container create 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 09:44:25 compute-0 systemd[1]: Started libpod-conmon-8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e.scope.
Oct 08 09:44:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:25 compute-0 podman[81137]: 2025-10-08 09:44:25.200354687 +0000 UTC m=+0.031767886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:25 compute-0 podman[81137]: 2025-10-08 09:44:25.307523249 +0000 UTC m=+0.138936418 container init 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 09:44:25 compute-0 podman[81137]: 2025-10-08 09:44:25.313120211 +0000 UTC m=+0.144533410 container start 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:44:25 compute-0 podman[81137]: 2025-10-08 09:44:25.31667577 +0000 UTC m=+0.148089009 container attach 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 09:44:25 compute-0 ceph-mon[73572]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:25 compute-0 happy_boyd[81153]: {
Oct 08 09:44:25 compute-0 happy_boyd[81153]:     "1": [
Oct 08 09:44:25 compute-0 happy_boyd[81153]:         {
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "devices": [
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "/dev/loop3"
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             ],
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "lv_name": "ceph_lv0",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "lv_size": "21470642176",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "name": "ceph_lv0",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "tags": {
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.cluster_name": "ceph",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.crush_device_class": "",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.encrypted": "0",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.osd_id": "1",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.type": "block",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.vdo": "0",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:                 "ceph.with_tpm": "0"
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             },
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "type": "block",
Oct 08 09:44:25 compute-0 happy_boyd[81153]:             "vg_name": "ceph_vg0"
Oct 08 09:44:25 compute-0 happy_boyd[81153]:         }
Oct 08 09:44:25 compute-0 happy_boyd[81153]:     ]
Oct 08 09:44:25 compute-0 happy_boyd[81153]: }
Oct 08 09:44:25 compute-0 systemd[1]: libpod-8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e.scope: Deactivated successfully.
Oct 08 09:44:25 compute-0 podman[81137]: 2025-10-08 09:44:25.626702202 +0000 UTC m=+0.458115391 container died 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 08 09:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16-merged.mount: Deactivated successfully.
Oct 08 09:44:25 compute-0 podman[81137]: 2025-10-08 09:44:25.68001807 +0000 UTC m=+0.511431239 container remove 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:25 compute-0 systemd[1]: libpod-conmon-8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e.scope: Deactivated successfully.
Oct 08 09:44:25 compute-0 sudo[81031]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct 08 09:44:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 08 09:44:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:44:25 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:25 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 08 09:44:25 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 08 09:44:25 compute-0 sudo[81174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:44:25 compute-0 sudo[81174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:25 compute-0 sudo[81174]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:25 compute-0 sudo[81199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:44:25 compute-0 sudo[81199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:26 compute-0 podman[81263]: 2025-10-08 09:44:26.265251305 +0000 UTC m=+0.042025142 container create 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:44:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:26 compute-0 systemd[1]: Started libpod-conmon-8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce.scope.
Oct 08 09:44:26 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:26 compute-0 podman[81263]: 2025-10-08 09:44:26.341119145 +0000 UTC m=+0.117893002 container init 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 09:44:26 compute-0 podman[81263]: 2025-10-08 09:44:26.248382146 +0000 UTC m=+0.025156003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:26 compute-0 podman[81263]: 2025-10-08 09:44:26.347870252 +0000 UTC m=+0.124644079 container start 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:44:26 compute-0 podman[81263]: 2025-10-08 09:44:26.351048329 +0000 UTC m=+0.127822196 container attach 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:44:26 compute-0 goofy_gagarin[81280]: 167 167
Oct 08 09:44:26 compute-0 systemd[1]: libpod-8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce.scope: Deactivated successfully.
Oct 08 09:44:26 compute-0 conmon[81280]: conmon 8743eb1f0ac2df5abb65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce.scope/container/memory.events
Oct 08 09:44:26 compute-0 podman[81263]: 2025-10-08 09:44:26.352581807 +0000 UTC m=+0.129355634 container died 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:44:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct 08 09:44:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 08 09:44:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:44:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:26 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Oct 08 09:44:26 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Oct 08 09:44:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-acb57bd8fdc8eb223ca11fecd1ff4250a3d305d5b50410a827e0588dd78d8e28-merged.mount: Deactivated successfully.
Oct 08 09:44:26 compute-0 podman[81263]: 2025-10-08 09:44:26.384448615 +0000 UTC m=+0.161222442 container remove 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:44:26 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 08 09:44:26 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:26 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 08 09:44:26 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:26 compute-0 systemd[1]: libpod-conmon-8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce.scope: Deactivated successfully.
Oct 08 09:44:26 compute-0 podman[81309]: 2025-10-08 09:44:26.665063524 +0000 UTC m=+0.044596720 container create ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:44:26 compute-0 systemd[1]: Started libpod-conmon-ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b.scope.
Oct 08 09:44:26 compute-0 podman[81309]: 2025-10-08 09:44:26.641657746 +0000 UTC m=+0.021190942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:26 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:26 compute-0 podman[81309]: 2025-10-08 09:44:26.75898695 +0000 UTC m=+0.138520126 container init ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 08 09:44:26 compute-0 podman[81309]: 2025-10-08 09:44:26.767834461 +0000 UTC m=+0.147367657 container start ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:44:26 compute-0 podman[81309]: 2025-10-08 09:44:26.771243386 +0000 UTC m=+0.150776552 container attach ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:44:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test[81326]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct 08 09:44:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test[81326]:                             [--no-systemd] [--no-tmpfs]
Oct 08 09:44:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test[81326]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 08 09:44:26 compute-0 systemd[1]: libpod-ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b.scope: Deactivated successfully.
Oct 08 09:44:26 compute-0 podman[81309]: 2025-10-08 09:44:26.937451681 +0000 UTC m=+0.316984837 container died ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35-merged.mount: Deactivated successfully.
Oct 08 09:44:26 compute-0 podman[81309]: 2025-10-08 09:44:26.985570219 +0000 UTC m=+0.365103375 container remove ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:44:26 compute-0 systemd[1]: libpod-conmon-ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b.scope: Deactivated successfully.
Oct 08 09:44:27 compute-0 systemd[1]: Reloading.
Oct 08 09:44:27 compute-0 systemd-rc-local-generator[81386]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:44:27 compute-0 systemd-sysv-generator[81390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:44:27 compute-0 ceph-mon[73572]: Deploying daemon osd.1 on compute-0
Oct 08 09:44:27 compute-0 ceph-mon[73572]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:27 compute-0 ceph-mon[73572]: Deploying daemon osd.0 on compute-1
Oct 08 09:44:27 compute-0 systemd[1]: Reloading.
Oct 08 09:44:27 compute-0 systemd-sysv-generator[81430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:44:27 compute-0 systemd-rc-local-generator[81425]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:44:27 compute-0 systemd[1]: Starting Ceph osd.1 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:44:28 compute-0 podman[81484]: 2025-10-08 09:44:28.038944332 +0000 UTC m=+0.039761881 container create 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 09:44:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:28 compute-0 podman[81484]: 2025-10-08 09:44:28.109258432 +0000 UTC m=+0.110076011 container init 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 08 09:44:28 compute-0 podman[81484]: 2025-10-08 09:44:28.114642587 +0000 UTC m=+0.115460136 container start 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:44:28 compute-0 podman[81484]: 2025-10-08 09:44:28.021066194 +0000 UTC m=+0.021883793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:28 compute-0 podman[81484]: 2025-10-08 09:44:28.126346857 +0000 UTC m=+0.127164406 container attach 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:44:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:28 compute-0 bash[81484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:28 compute-0 bash[81484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:28 compute-0 lvm[81582]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:44:28 compute-0 lvm[81582]: VG ceph_vg0 finished
Oct 08 09:44:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 08 09:44:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:28 compute-0 bash[81484]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 08 09:44:28 compute-0 bash[81484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:28 compute-0 bash[81484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 08 09:44:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 08 09:44:28 compute-0 bash[81484]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 08 09:44:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 08 09:44:28 compute-0 bash[81484]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 08 09:44:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:29 compute-0 bash[81484]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:29 compute-0 bash[81484]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 08 09:44:29 compute-0 bash[81484]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 08 09:44:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 08 09:44:29 compute-0 bash[81484]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 08 09:44:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 08 09:44:29 compute-0 bash[81484]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 08 09:44:29 compute-0 systemd[1]: libpod-0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452.scope: Deactivated successfully.
Oct 08 09:44:29 compute-0 systemd[1]: libpod-0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452.scope: Consumed 1.406s CPU time.
Oct 08 09:44:29 compute-0 podman[81484]: 2025-10-08 09:44:29.392273489 +0000 UTC m=+1.393091078 container died 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:29 compute-0 ceph-mon[73572]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4-merged.mount: Deactivated successfully.
Oct 08 09:44:29 compute-0 podman[81484]: 2025-10-08 09:44:29.453884901 +0000 UTC m=+1.454702490 container remove 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:44:29 compute-0 podman[81732]: 2025-10-08 09:44:29.681789932 +0000 UTC m=+0.056053593 container create 7ace3f50e48c85dfbeac24b6a9c8de138ec140013d8daa3351908e2ceb79b4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:29 compute-0 podman[81732]: 2025-10-08 09:44:29.738657998 +0000 UTC m=+0.112921679 container init 7ace3f50e48c85dfbeac24b6a9c8de138ec140013d8daa3351908e2ceb79b4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 08 09:44:29 compute-0 podman[81732]: 2025-10-08 09:44:29.654772691 +0000 UTC m=+0.029036422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:29 compute-0 podman[81732]: 2025-10-08 09:44:29.751334808 +0000 UTC m=+0.125598459 container start 7ace3f50e48c85dfbeac24b6a9c8de138ec140013d8daa3351908e2ceb79b4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:44:29 compute-0 bash[81732]: 7ace3f50e48c85dfbeac24b6a9c8de138ec140013d8daa3351908e2ceb79b4c2
Oct 08 09:44:29 compute-0 systemd[1]: Started Ceph osd.1 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:44:29 compute-0 ceph-osd[81751]: set uid:gid to 167:167 (ceph:ceph)
Oct 08 09:44:29 compute-0 ceph-osd[81751]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct 08 09:44:29 compute-0 ceph-osd[81751]: pidfile_write: ignore empty --pid-file
Oct 08 09:44:29 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:29 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:29 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:29 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:29 compute-0 sudo[81199]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:44:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:44:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:29 compute-0 sudo[81763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:44:29 compute-0 sudo[81763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:29 compute-0 sudo[81763]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:29 compute-0 sudo[81788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:44:29 compute-0 sudo[81788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:30 compute-0 sudo[81896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asvefzbgfphagqfsobqcccobgjsejwhj ; /usr/bin/python3'
Oct 08 09:44:30 compute-0 sudo[81896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:44:30 compute-0 podman[81874]: 2025-10-08 09:44:30.33518262 +0000 UTC m=+0.039238566 container create 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:44:30 compute-0 systemd[1]: Started libpod-conmon-415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a.scope.
Oct 08 09:44:30 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:30 compute-0 podman[81874]: 2025-10-08 09:44:30.413430153 +0000 UTC m=+0.117486109 container init 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:44:30 compute-0 podman[81874]: 2025-10-08 09:44:30.318867449 +0000 UTC m=+0.022923415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:30 compute-0 podman[81874]: 2025-10-08 09:44:30.420079548 +0000 UTC m=+0.124135494 container start 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:30 compute-0 podman[81874]: 2025-10-08 09:44:30.422986897 +0000 UTC m=+0.127042843 container attach 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 09:44:30 compute-0 charming_albattani[81904]: 167 167
Oct 08 09:44:30 compute-0 systemd[1]: libpod-415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a.scope: Deactivated successfully.
Oct 08 09:44:30 compute-0 podman[81874]: 2025-10-08 09:44:30.427385462 +0000 UTC m=+0.131441418 container died 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 09:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c50712efa9855408ed3ff6e736fb635510267289aa7a63e6a482701caff1000a-merged.mount: Deactivated successfully.
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:30 compute-0 podman[81874]: 2025-10-08 09:44:30.465252985 +0000 UTC m=+0.169308931 container remove 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:44:30 compute-0 python3[81901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:44:30 compute-0 systemd[1]: libpod-conmon-415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a.scope: Deactivated successfully.
Oct 08 09:44:30 compute-0 podman[81925]: 2025-10-08 09:44:30.525132034 +0000 UTC m=+0.040500405 container create d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 08 09:44:30 compute-0 systemd[1]: Started libpod-conmon-d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4.scope.
Oct 08 09:44:30 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe875516d91d975b32dbad68a0295fd211e917c03c1f7b1689e524b5990f6600/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe875516d91d975b32dbad68a0295fd211e917c03c1f7b1689e524b5990f6600/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe875516d91d975b32dbad68a0295fd211e917c03c1f7b1689e524b5990f6600/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:30 compute-0 podman[81925]: 2025-10-08 09:44:30.506464571 +0000 UTC m=+0.021832972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:44:30 compute-0 podman[81925]: 2025-10-08 09:44:30.611711663 +0000 UTC m=+0.127080054 container init d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:44:30 compute-0 podman[81925]: 2025-10-08 09:44:30.617271554 +0000 UTC m=+0.132639925 container start d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 09:44:30 compute-0 podman[81925]: 2025-10-08 09:44:30.630418278 +0000 UTC m=+0.145786659 container attach d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:44:30 compute-0 podman[81950]: 2025-10-08 09:44:30.643997095 +0000 UTC m=+0.055796954 container create 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Oct 08 09:44:30 compute-0 systemd[1]: Started libpod-conmon-1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859.scope.
Oct 08 09:44:30 compute-0 ceph-osd[81751]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 08 09:44:30 compute-0 podman[81950]: 2025-10-08 09:44:30.625203188 +0000 UTC m=+0.037003067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:30 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:30 compute-0 ceph-osd[81751]: load: jerasure load: lrc 
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:30 compute-0 podman[81950]: 2025-10-08 09:44:30.736915029 +0000 UTC m=+0.148714918 container init 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 08 09:44:30 compute-0 podman[81950]: 2025-10-08 09:44:30.742917634 +0000 UTC m=+0.154717493 container start 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 08 09:44:30 compute-0 podman[81950]: 2025-10-08 09:44:30.74607324 +0000 UTC m=+0.157873099 container attach 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:44:30 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:30 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:30 compute-0 ceph-mon[73572]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 08 09:44:30 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 08 09:44:31 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155317862' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 08 09:44:31 compute-0 bold_wing[81946]: 
Oct 08 09:44:31 compute-0 bold_wing[81946]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":87,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1759916660,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-08T09:43:01:374245+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-08T09:44:24.295017+0000","services":{}},"progress_events":{}}
Oct 08 09:44:31 compute-0 systemd[1]: libpod-d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4.scope: Deactivated successfully.
Oct 08 09:44:31 compute-0 podman[81925]: 2025-10-08 09:44:31.058700663 +0000 UTC m=+0.574069054 container died d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe875516d91d975b32dbad68a0295fd211e917c03c1f7b1689e524b5990f6600-merged.mount: Deactivated successfully.
Oct 08 09:44:31 compute-0 podman[81925]: 2025-10-08 09:44:31.095824672 +0000 UTC m=+0.611193043 container remove d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 09:44:31 compute-0 systemd[1]: libpod-conmon-d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4.scope: Deactivated successfully.
Oct 08 09:44:31 compute-0 sudo[81896]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:44:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:44:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 08 09:44:31 compute-0 ceph-osd[81751]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:31 compute-0 lvm[82083]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:44:31 compute-0 lvm[82083]: VG ceph_vg0 finished
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:31 compute-0 agitated_hoover[81969]: {}
Oct 08 09:44:31 compute-0 systemd[1]: libpod-1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859.scope: Deactivated successfully.
Oct 08 09:44:31 compute-0 systemd[1]: libpod-1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859.scope: Consumed 1.002s CPU time.
Oct 08 09:44:31 compute-0 podman[81950]: 2025-10-08 09:44:31.409661762 +0000 UTC m=+0.821461641 container died 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df-merged.mount: Deactivated successfully.
Oct 08 09:44:31 compute-0 podman[81950]: 2025-10-08 09:44:31.451211419 +0000 UTC m=+0.863011278 container remove 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 09:44:31 compute-0 systemd[1]: libpod-conmon-1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859.scope: Deactivated successfully.
Oct 08 09:44:31 compute-0 sudo[81788]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:44:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:44:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount shared_bdev_used = 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: RocksDB version: 7.9.2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Git sha 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: DB SUMMARY
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: DB Session ID:  OKB0236OTSDNNJ5ULVKQ
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: CURRENT file:  CURRENT
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: IDENTITY file:  IDENTITY
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                         Options.error_if_exists: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.create_if_missing: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                         Options.paranoid_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                                     Options.env: 0x559f29d8ddc0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                                Options.info_log: 0x559f29d917a0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_file_opening_threads: 16
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                              Options.statistics: (nil)
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.use_fsync: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.max_log_file_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                         Options.allow_fallocate: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.use_direct_reads: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.create_missing_column_families: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                              Options.db_log_dir: 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                                 Options.wal_dir: db.wal
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.advise_random_on_open: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.write_buffer_manager: 0x559f29e9aa00
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                            Options.rate_limiter: (nil)
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.unordered_write: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.row_cache: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                              Options.wal_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.allow_ingest_behind: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.two_write_queues: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.manual_wal_flush: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.wal_compression: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.atomic_flush: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.log_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.allow_data_in_errors: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.db_host_id: __hostname__
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.max_background_jobs: 4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.max_background_compactions: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.max_subcompactions: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.max_open_files: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.bytes_per_sync: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.max_background_flushes: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Compression algorithms supported:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kZSTD supported: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kXpressCompression supported: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kBZip2Compression supported: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kLZ4Compression supported: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kZlibCompression supported: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kLZ4HCCompression supported: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kSnappyCompression supported: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ab27757c-4f23-4fe8-9f12-78d1a161a24a
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671633238, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671633484, "job": 1, "event": "recovery_finished"}
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: freelist init
Oct 08 09:44:31 compute-0 ceph-osd[81751]: freelist _read_cfg
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs umount
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) close
Oct 08 09:44:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3155317862' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 08 09:44:31 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:31 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:31 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:31 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluefs mount shared_bdev_used = 4718592
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: RocksDB version: 7.9.2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Git sha 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: DB SUMMARY
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: DB Session ID:  OKB0236OTSDNNJ5ULVKR
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: CURRENT file:  CURRENT
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: IDENTITY file:  IDENTITY
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                         Options.error_if_exists: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.create_if_missing: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                         Options.paranoid_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                                     Options.env: 0x559f29f3e2a0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                                Options.info_log: 0x559f29d91920
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_file_opening_threads: 16
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                              Options.statistics: (nil)
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.use_fsync: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.max_log_file_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                         Options.allow_fallocate: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.use_direct_reads: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.create_missing_column_families: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                              Options.db_log_dir: 
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                                 Options.wal_dir: db.wal
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.advise_random_on_open: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.write_buffer_manager: 0x559f29e9ac80
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                            Options.rate_limiter: (nil)
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.unordered_write: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.row_cache: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                              Options.wal_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.allow_ingest_behind: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.two_write_queues: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.manual_wal_flush: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.wal_compression: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.atomic_flush: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.log_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.allow_data_in_errors: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.db_host_id: __hostname__
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.max_background_jobs: 4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.max_background_compactions: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.max_subcompactions: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.max_open_files: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.bytes_per_sync: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.max_background_flushes: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Compression algorithms supported:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kZSTD supported: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kXpressCompression supported: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kBZip2Compression supported: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kLZ4Compression supported: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kZlibCompression supported: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kLZ4HCCompression supported: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         kSnappyCompression supported: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f28fb69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ab27757c-4f23-4fe8-9f12-78d1a161a24a
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671888685, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671893452, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916671, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab27757c-4f23-4fe8-9f12-78d1a161a24a", "db_session_id": "OKB0236OTSDNNJ5ULVKR", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671897415, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916671, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab27757c-4f23-4fe8-9f12-78d1a161a24a", "db_session_id": "OKB0236OTSDNNJ5ULVKR", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671899844, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916671, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab27757c-4f23-4fe8-9f12-78d1a161a24a", "db_session_id": "OKB0236OTSDNNJ5ULVKR", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671901691, "job": 1, "event": "recovery_finished"}
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559f29f8e000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: DB pointer 0x559f29f4a000
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 08 09:44:31 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 09:44:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 08 09:44:31 compute-0 ceph-osd[81751]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 08 09:44:31 compute-0 ceph-osd[81751]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 08 09:44:31 compute-0 ceph-osd[81751]: _get_class not permitted to load lua
Oct 08 09:44:31 compute-0 ceph-osd[81751]: _get_class not permitted to load sdk
Oct 08 09:44:31 compute-0 ceph-osd[81751]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 08 09:44:31 compute-0 ceph-osd[81751]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 08 09:44:31 compute-0 ceph-osd[81751]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 08 09:44:31 compute-0 ceph-osd[81751]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 08 09:44:31 compute-0 ceph-osd[81751]: osd.1 0 load_pgs
Oct 08 09:44:31 compute-0 ceph-osd[81751]: osd.1 0 load_pgs opened 0 pgs
Oct 08 09:44:31 compute-0 ceph-osd[81751]: osd.1 0 log_to_monitors true
Oct 08 09:44:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1[81747]: 2025-10-08T09:44:31.932+0000 7f264c97f740 -1 osd.1 0 log_to_monitors true
Oct 08 09:44:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Oct 08 09:44:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 08 09:44:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 08 09:44:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:44:32 compute-0 ceph-mon[73572]: from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 08 09:44:32 compute-0 ceph-mon[73572]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 08 09:44:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Oct 08 09:44:32 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Oct 08 09:44:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Oct 08 09:44:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 08 09:44:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 08 09:44:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:32 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 08 09:44:32 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:32 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 08 09:44:32 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 08 09:44:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:44:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:44:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:33 compute-0 sudo[82519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:44:33 compute-0 sudo[82519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:33 compute-0 sudo[82519]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:33 compute-0 sudo[82544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:44:33 compute-0 sudo[82544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:33 compute-0 sudo[82544]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:33 compute-0 sudo[82569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 09:44:33 compute-0 sudo[82569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:44:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 08 09:44:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:44:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 08 09:44:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Oct 08 09:44:33 compute-0 ceph-osd[81751]: osd.1 0 done with init, starting boot process
Oct 08 09:44:33 compute-0 ceph-osd[81751]: osd.1 0 start_boot
Oct 08 09:44:33 compute-0 ceph-osd[81751]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 08 09:44:33 compute-0 ceph-osd[81751]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 08 09:44:33 compute-0 ceph-osd[81751]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 08 09:44:33 compute-0 ceph-osd[81751]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 08 09:44:33 compute-0 ceph-osd[81751]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 08 09:44:33 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Oct 08 09:44:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:33 compute-0 ceph-mon[73572]: from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 08 09:44:33 compute-0 ceph-mon[73572]: osdmap e6: 2 total, 0 up, 2 in
Oct 08 09:44:33 compute-0 ceph-mon[73572]: from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 08 09:44:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:33 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:33 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 08 09:44:33 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2242474769; not ready for session (expect reconnect)
Oct 08 09:44:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:33 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 08 09:44:34 compute-0 podman[82663]: 2025-10-08 09:44:34.04095448 +0000 UTC m=+0.086159817 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 podman[82681]: 2025-10-08 09:44:34.209205068 +0000 UTC m=+0.052815263 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:44:34 compute-0 podman[82663]: 2025-10-08 09:44:34.222146616 +0000 UTC m=+0.267351923 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:34 compute-0 sudo[82569]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 sudo[82746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:44:34 compute-0 sudo[82746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:34 compute-0 sudo[82746]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:34 compute-0 sudo[82771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- inventory --format=json-pretty --filter-for-batch
Oct 08 09:44:34 compute-0 sudo[82771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:44:34 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2242474769; not ready for session (expect reconnect)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:34 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 08 09:44:34 compute-0 ceph-mon[73572]: osdmap e7: 2 total, 0 up, 2 in
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 ceph-mon[73572]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:34 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:34 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 08 09:44:35 compute-0 podman[82835]: 2025-10-08 09:44:35.152509611 +0000 UTC m=+0.045522719 container create 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:35 compute-0 systemd[1]: Started libpod-conmon-3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0.scope.
Oct 08 09:44:35 compute-0 podman[82835]: 2025-10-08 09:44:35.131603619 +0000 UTC m=+0.024616757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:35 compute-0 podman[82835]: 2025-10-08 09:44:35.252858134 +0000 UTC m=+0.145871262 container init 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 09:44:35 compute-0 podman[82835]: 2025-10-08 09:44:35.264237403 +0000 UTC m=+0.157250501 container start 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Oct 08 09:44:35 compute-0 sleepy_clarke[82849]: 167 167
Oct 08 09:44:35 compute-0 systemd[1]: libpod-3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0.scope: Deactivated successfully.
Oct 08 09:44:35 compute-0 podman[82835]: 2025-10-08 09:44:35.273836047 +0000 UTC m=+0.166849155 container attach 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:35 compute-0 podman[82835]: 2025-10-08 09:44:35.274411035 +0000 UTC m=+0.167424143 container died 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1966638ce9b85886e37fa901e2c697734d2dc5e9929202cbced237cf140d4b8-merged.mount: Deactivated successfully.
Oct 08 09:44:35 compute-0 podman[82835]: 2025-10-08 09:44:35.331590141 +0000 UTC m=+0.224603239 container remove 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:44:35 compute-0 systemd[1]: libpod-conmon-3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0.scope: Deactivated successfully.
Oct 08 09:44:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:44:35 compute-0 podman[82871]: 2025-10-08 09:44:35.487192141 +0000 UTC m=+0.048619005 container create 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:44:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:35 compute-0 systemd[1]: Started libpod-conmon-1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174.scope.
Oct 08 09:44:35 compute-0 podman[82871]: 2025-10-08 09:44:35.465815515 +0000 UTC m=+0.027242439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:44:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:44:35 compute-0 podman[82871]: 2025-10-08 09:44:35.602438901 +0000 UTC m=+0.163865795 container init 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:44:35 compute-0 podman[82871]: 2025-10-08 09:44:35.610115156 +0000 UTC m=+0.171542010 container start 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:44:35 compute-0 podman[82871]: 2025-10-08 09:44:35.622854497 +0000 UTC m=+0.184281371 container attach 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 08 09:44:35 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2242474769; not ready for session (expect reconnect)
Oct 08 09:44:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:35 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 08 09:44:35 compute-0 ceph-mon[73572]: purged_snaps scrub starts
Oct 08 09:44:35 compute-0 ceph-mon[73572]: purged_snaps scrub ok
Oct 08 09:44:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:35 compute-0 ceph-mon[73572]: from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 08 09:44:35 compute-0 ceph-mon[73572]: osdmap e8: 2 total, 0 up, 2 in
Oct 08 09:44:35 compute-0 ceph-mon[73572]: from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 08 09:44:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 08 09:44:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:44:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct 08 09:44:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Oct 08 09:44:35 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Oct 08 09:44:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:35 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:35 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 08 09:44:35 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct 08 09:44:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:35 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]: [
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:     {
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         "available": false,
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         "being_replaced": false,
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         "ceph_device_lvm": false,
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         "lsm_data": {},
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         "lvs": [],
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         "path": "/dev/sr0",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         "rejected_reasons": [
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "Insufficient space (<5GB)",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "Has a FileSystem"
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         ],
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         "sys_api": {
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "actuators": null,
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "device_nodes": [
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:                 "sr0"
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             ],
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "devname": "sr0",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "human_readable_size": "482.00 KB",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "id_bus": "ata",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "model": "QEMU DVD-ROM",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "nr_requests": "2",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "parent": "/dev/sr0",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "partitions": {},
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "path": "/dev/sr0",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "removable": "1",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "rev": "2.5+",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "ro": "0",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "rotational": "0",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "sas_address": "",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "sas_device_handle": "",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "scheduler_mode": "mq-deadline",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "sectors": 0,
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "sectorsize": "2048",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "size": 493568.0,
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "support_discard": "2048",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "type": "disk",
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:             "vendor": "QEMU"
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:         }
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]:     }
Oct 08 09:44:36 compute-0 heuristic_hopper[82887]: ]
Oct 08 09:44:36 compute-0 systemd[1]: libpod-1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174.scope: Deactivated successfully.
Oct 08 09:44:36 compute-0 podman[82871]: 2025-10-08 09:44:36.510734788 +0000 UTC m=+1.072161662 container died 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Oct 08 09:44:36 compute-0 ceph-osd[81751]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 37.521 iops: 9605.469 elapsed_sec: 0.312
Oct 08 09:44:36 compute-0 ceph-osd[81751]: log_channel(cluster) log [WRN] : OSD bench result of 9605.469339 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 08 09:44:36 compute-0 ceph-osd[81751]: osd.1 0 waiting for initial osdmap
Oct 08 09:44:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1[81747]: 2025-10-08T09:44:36.520+0000 7f2649115640 -1 osd.1 0 waiting for initial osdmap
Oct 08 09:44:36 compute-0 ceph-osd[81751]: osd.1 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 08 09:44:36 compute-0 ceph-osd[81751]: osd.1 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 08 09:44:36 compute-0 ceph-osd[81751]: osd.1 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 08 09:44:36 compute-0 ceph-osd[81751]: osd.1 9 check_osdmap_features require_osd_release unknown -> squid
Oct 08 09:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41-merged.mount: Deactivated successfully.
Oct 08 09:44:36 compute-0 ceph-osd[81751]: osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 08 09:44:36 compute-0 ceph-osd[81751]: osd.1 9 set_numa_affinity not setting numa affinity
Oct 08 09:44:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1[81747]: 2025-10-08T09:44:36.548+0000 7f2643f2a640 -1 osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 08 09:44:36 compute-0 podman[82871]: 2025-10-08 09:44:36.551860212 +0000 UTC m=+1.113287076 container remove 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Oct 08 09:44:36 compute-0 ceph-osd[81751]: osd.1 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct 08 09:44:36 compute-0 systemd[1]: libpod-conmon-1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174.scope: Deactivated successfully.
Oct 08 09:44:36 compute-0 sudo[82771]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2242474769; not ready for session (expect reconnect)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct 08 09:44:36 compute-0 ceph-mon[73572]: osdmap e9: 2 total, 0 up, 2 in
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mon[73572]: Adjusting osd_memory_target on compute-1 to  5247M
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:36 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 08 09:44:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:44:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Oct 08 09:44:37 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769] boot
Oct 08 09:44:37 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Oct 08 09:44:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:44:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:37 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:37 compute-0 ceph-osd[81751]: osd.1 10 state: booting -> active
Oct 08 09:44:37 compute-0 ceph-mon[73572]: purged_snaps scrub starts
Oct 08 09:44:37 compute-0 ceph-mon[73572]: purged_snaps scrub ok
Oct 08 09:44:37 compute-0 ceph-mon[73572]: OSD bench result of 9605.469339 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 08 09:44:37 compute-0 ceph-mon[73572]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct 08 09:44:37 compute-0 ceph-mon[73572]: Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 08 09:44:37 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:37 compute-0 ceph-mon[73572]: osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769] boot
Oct 08 09:44:37 compute-0 ceph-mon[73572]: osdmap e10: 2 total, 1 up, 2 in
Oct 08 09:44:37 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:37 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:44:37 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct 08 09:44:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:37 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:38 compute-0 ceph-mgr[73869]: [devicehealth INFO root] creating mgr pool
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Oct 08 09:44:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 08 09:44:38 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:38 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 08 09:44:38 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:38 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Oct 08 09:44:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 08 09:44:38 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:38 compute-0 ceph-mon[73572]: pgmap v40: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 08 09:44:38 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 08 09:44:38 compute-0 ceph-osd[81751]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 08 09:44:38 compute-0 ceph-osd[81751]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 08 09:44:38 compute-0 ceph-osd[81751]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 08 09:44:39 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct 08 09:44:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:39 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 08 09:44:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 08 09:44:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 08 09:44:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Oct 08 09:44:39 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721] boot
Oct 08 09:44:39 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct 08 09:44:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:44:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:39 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:39 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 08 09:44:39 compute-0 ceph-mon[73572]: osdmap e11: 2 total, 1 up, 2 in
Oct 08 09:44:39 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:39 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 08 09:44:39 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:39 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 08 09:44:39 compute-0 ceph-mon[73572]: osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721] boot
Oct 08 09:44:39 compute-0 ceph-mon[73572]: osdmap e12: 2 total, 2 up, 2 in
Oct 08 09:44:39 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:44:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 08 09:44:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 08 09:44:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Oct 08 09:44:40 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Oct 08 09:44:40 compute-0 ceph-mon[73572]: OSD bench result of 10085.285206 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 08 09:44:40 compute-0 ceph-mon[73572]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 08 09:44:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 08 09:44:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Oct 08 09:44:41 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct 08 09:44:41 compute-0 ceph-mon[73572]: osdmap e13: 2 total, 2 up, 2 in
Oct 08 09:44:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 08 09:44:42 compute-0 ceph-mgr[73869]: [devicehealth INFO root] creating main.db for devicehealth
Oct 08 09:44:42 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct 08 09:44:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 08 09:44:42 compute-0 sudo[84076]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 08 09:44:42 compute-0 sudo[84076]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 08 09:44:42 compute-0 sudo[84076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 08 09:44:42 compute-0 sudo[84076]: pam_unix(sudo:session): session closed for user root
Oct 08 09:44:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 08 09:44:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 08 09:44:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:44:42 compute-0 ceph-mon[73572]: osdmap e14: 2 total, 2 up, 2 in
Oct 08 09:44:42 compute-0 ceph-mon[73572]: pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 08 09:44:42 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 08 09:44:42 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 08 09:44:42 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:44:43 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ixicfj(active, since 80s)
Oct 08 09:44:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:44 compute-0 ceph-mon[73572]: mgrmap e9: compute-0.ixicfj(active, since 80s)
Oct 08 09:44:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:45 compute-0 ceph-mon[73572]: pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:47 compute-0 ceph-mon[73572]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:49 compute-0 ceph-mon[73572]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:51 compute-0 ceph-mon[73572]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:44:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:44:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:44:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:44:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:44:52 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:44:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:53 compute-0 ceph-mon[73572]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:44:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:44:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:44:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:44:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 08 09:44:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:44:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:44:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:44:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:44:54 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:44:54 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:44:55 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:44:55 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:44:55 compute-0 ceph-mon[73572]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:55 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:55 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:55 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:55 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:55 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:44:55 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:55 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:44:55 compute-0 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:44:55 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:44:55 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:44:56 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:44:56 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:44:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:56 compute-0 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:44:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:44:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:44:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:44:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:56 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev a1daac8b-8bd7-4296-8123-624af205803a (Updating mon deployment (+2 -> 3))
Oct 08 09:44:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 08 09:44:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:44:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 08 09:44:56 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:44:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:44:56 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:56 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Oct 08 09:44:56 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Oct 08 09:44:57 compute-0 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:44:57 compute-0 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:44:57 compute-0 ceph-mon[73572]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:57 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:57 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:57 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:44:57 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:44:57 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:44:57 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:44:57 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 08 09:44:57 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 08 09:44:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:44:58 compute-0 ceph-mon[73572]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:44:58 compute-0 ceph-mon[73572]: Deploying daemon mon.compute-2 on compute-2
Oct 08 09:44:58 compute-0 ceph-mon[73572]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 08 09:44:58 compute-0 ceph-mon[73572]: Cluster is now healthy
Oct 08 09:44:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:00 compute-0 ceph-mon[73572]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:00 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Oct 08 09:45:00 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Oct 08 09:45:00 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:00 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:45:00 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 08 09:45:00 compute-0 ceph-mon[73572]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Oct 08 09:45:00 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:45:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:01 compute-0 sudo[84102]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjgzkbnnfoprdqovyryjpvjdaxxxffkr ; /usr/bin/python3'
Oct 08 09:45:01 compute-0 sudo[84102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:01 compute-0 python3[84104]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:01 compute-0 podman[84106]: 2025-10-08 09:45:01.422628059 +0000 UTC m=+0.040906117 container create d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 08 09:45:01 compute-0 systemd[1]: Started libpod-conmon-d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d.scope.
Oct 08 09:45:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87ba05174c88ee8e51dbf1b172aa4df1580c564c8706413888bea07e4dd4677/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87ba05174c88ee8e51dbf1b172aa4df1580c564c8706413888bea07e4dd4677/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87ba05174c88ee8e51dbf1b172aa4df1580c564c8706413888bea07e4dd4677/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:01 compute-0 podman[84106]: 2025-10-08 09:45:01.486174055 +0000 UTC m=+0.104452113 container init d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:01 compute-0 podman[84106]: 2025-10-08 09:45:01.495120316 +0000 UTC m=+0.113398374 container start d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 09:45:01 compute-0 podman[84106]: 2025-10-08 09:45:01.404027728 +0000 UTC m=+0.022305816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:01 compute-0 podman[84106]: 2025-10-08 09:45:01.498989337 +0000 UTC m=+0.117267395 container attach d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 09:45:01 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 08 09:45:01 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct 08 09:45:01 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:45:01 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:01 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 08 09:45:01 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 08 09:45:02 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 08 09:45:02 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:45:02 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 08 09:45:02 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 08 09:45:02 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 08 09:45:02 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:02 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:02 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 08 09:45:02 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct 08 09:45:02 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:45:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:02 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 08 09:45:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:02 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 08 09:45:03 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:03 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:03 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:03 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 08 09:45:03 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct 08 09:45:03 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:45:03 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:03 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 08 09:45:04 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 08 09:45:04 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:04 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:04 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:04 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 08 09:45:04 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 08 09:45:04 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct 08 09:45:04 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:45:04 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:04 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 08 09:45:04 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 08 09:45:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 08 09:45:05 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 08 09:45:05 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 08 09:45:05 compute-0 ceph-mon[73572]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : monmap epoch 2
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : last_changed 2025-10-08T09:45:00.661832+0000
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : created 2025-10-08T09:42:59.307631+0000
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap 
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ixicfj(active, since 103s)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:05 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev a1daac8b-8bd7-4296-8123-624af205803a (Updating mon deployment (+2 -> 3))
Oct 08 09:45:05 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event a1daac8b-8bd7-4296-8123-624af205803a (Updating mon deployment (+2 -> 3)) in 9 seconds
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:05 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 4f6e05db-358c-451e-8e62-6c11a418e1af (Updating mgr deployment (+2 -> 3))
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: Deploying daemon mon.compute-1 on compute-1
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0 calling monitor election
Oct 08 09:45:05 compute-0 ceph-mon[73572]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-2 calling monitor election
Oct 08 09:45:05 compute-0 ceph-mon[73572]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: monmap epoch 2
Oct 08 09:45:05 compute-0 ceph-mon[73572]: fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:05 compute-0 ceph-mon[73572]: last_changed 2025-10-08T09:45:00.661832+0000
Oct 08 09:45:05 compute-0 ceph-mon[73572]: created 2025-10-08T09:42:59.307631+0000
Oct 08 09:45:05 compute-0 ceph-mon[73572]: min_mon_release 19 (squid)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: election_strategy: 1
Oct 08 09:45:05 compute-0 ceph-mon[73572]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 08 09:45:05 compute-0 ceph-mon[73572]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 08 09:45:05 compute-0 ceph-mon[73572]: fsmap 
Oct 08 09:45:05 compute-0 ceph-mon[73572]: osdmap e14: 2 total, 2 up, 2 in
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mgrmap e9: compute-0.ixicfj(active, since 103s)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: overall HEALTH_OK
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:05 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:05 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:05 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.mtagwx on compute-2
Oct 08 09:45:05 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.mtagwx on compute-2
Oct 08 09:45:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 08 09:45:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2539592381' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 08 09:45:06 compute-0 priceless_kowalevski[84122]: 
Oct 08 09:45:06 compute-0 priceless_kowalevski[84122]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-2"],"quorum_age":0,"monmap":{"epoch":2,"min_mon_release_name":"squid","num_mons":2},"osdmap":{"epoch":14,"num_osds":2,"num_up_osds":2,"osd_up_since":1759916679,"num_in_osds":2,"osd_in_since":1759916660,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475242496,"bytes_avail":42466041856,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-10-08T09:43:01:374245+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-08T09:44:24.295017+0000","services":{}},"progress_events":{"a1daac8b-8bd7-4296-8123-624af205803a":{"message":"Updating mon deployment (+2 -> 3) (3s)\n      [==============..............] (remaining: 3s)","progress":0.5,"add_to_ceph_s":true}}}
Oct 08 09:45:06 compute-0 systemd[1]: libpod-d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d.scope: Deactivated successfully.
Oct 08 09:45:06 compute-0 podman[84106]: 2025-10-08 09:45:06.320491953 +0000 UTC m=+4.938770051 container died d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 09:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c87ba05174c88ee8e51dbf1b172aa4df1580c564c8706413888bea07e4dd4677-merged.mount: Deactivated successfully.
Oct 08 09:45:06 compute-0 podman[84106]: 2025-10-08 09:45:06.369090649 +0000 UTC m=+4.987368707 container remove d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Oct 08 09:45:06 compute-0 systemd[1]: libpod-conmon-d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d.scope: Deactivated successfully.
Oct 08 09:45:06 compute-0 sudo[84102]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 08 09:45:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Oct 08 09:45:06 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:06 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 08 09:45:06 compute-0 ceph-mon[73572]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 08 09:45:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:45:06 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 08 09:45:06 compute-0 ceph-mon[73572]: paxos.0).electionLogic(10) init, last seen epoch 10
Oct 08 09:45:06 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:45:06 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:06 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:45:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:06 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 08 09:45:06 compute-0 ceph-mgr[73869]: mgr.server handle_report got status from non-daemon mon.compute-2
Oct 08 09:45:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:06.663+0000 7fa814663640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Oct 08 09:45:06 compute-0 sudo[84184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpszwxhrrxskgeoglziiuktiadebhzjf ; /usr/bin/python3'
Oct 08 09:45:06 compute-0 sudo[84184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:06 compute-0 python3[84186]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:06 compute-0 podman[84187]: 2025-10-08 09:45:06.928143736 +0000 UTC m=+0.042438670 container create a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:06 compute-0 systemd[1]: Started libpod-conmon-a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc.scope.
Oct 08 09:45:06 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/188c5be260ce5df253dd263ebb6d0582559fba0978701d3d9e349877dfd2c1d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/188c5be260ce5df253dd263ebb6d0582559fba0978701d3d9e349877dfd2c1d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:07 compute-0 podman[84187]: 2025-10-08 09:45:06.90748977 +0000 UTC m=+0.021784734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:07 compute-0 podman[84187]: 2025-10-08 09:45:07.007103811 +0000 UTC m=+0.121398775 container init a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 08 09:45:07 compute-0 podman[84187]: 2025-10-08 09:45:07.01262414 +0000 UTC m=+0.126919074 container start a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 08 09:45:07 compute-0 podman[84187]: 2025-10-08 09:45:07.017830666 +0000 UTC m=+0.132125600 container attach a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 09:45:07 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:07 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:07 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:07 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:07 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:07 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 08 09:45:07 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:07 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:07 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 3 completed events
Oct 08 09:45:07 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:45:07 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:07 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:08 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:08 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:08 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:08 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:08 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:08 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 08 09:45:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:08 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:09 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:09 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:09 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:09 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 08 09:45:10 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:10 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:10 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:10 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:10 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:10 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 08 09:45:10 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:10 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:10 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 08 09:45:11 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 08 09:45:11 compute-0 ceph-mon[73572]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : monmap epoch 3
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : last_changed 2025-10-08T09:45:06.514939+0000
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : created 2025-10-08T09:42:59.307631+0000
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap 
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ixicfj(active, since 109s)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0 calling monitor election
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-2 calling monitor election
Oct 08 09:45:11 compute-0 ceph-mon[73572]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-1 calling monitor election
Oct 08 09:45:11 compute-0 ceph-mon[73572]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: monmap epoch 3
Oct 08 09:45:11 compute-0 ceph-mon[73572]: fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:11 compute-0 ceph-mon[73572]: last_changed 2025-10-08T09:45:06.514939+0000
Oct 08 09:45:11 compute-0 ceph-mon[73572]: created 2025-10-08T09:42:59.307631+0000
Oct 08 09:45:11 compute-0 ceph-mon[73572]: min_mon_release 19 (squid)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: election_strategy: 1
Oct 08 09:45:11 compute-0 ceph-mon[73572]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 08 09:45:11 compute-0 ceph-mon[73572]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 08 09:45:11 compute-0 ceph-mon[73572]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 08 09:45:11 compute-0 ceph-mon[73572]: fsmap 
Oct 08 09:45:11 compute-0 ceph-mon[73572]: osdmap e14: 2 total, 2 up, 2 in
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mgrmap e9: compute-0.ixicfj(active, since 109s)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: overall HEALTH_OK
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:11 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:11 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:11 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.swlvov on compute-1
Oct 08 09:45:11 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.swlvov on compute-1
Oct 08 09:45:12 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct 08 09:45:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:12 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:12 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:12 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 08 09:45:12 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 08 09:45:12 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 09:45:12 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:12 compute-0 ceph-mon[73572]: Deploying daemon mgr.compute-1.swlvov on compute-1
Oct 08 09:45:12 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:45:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:45:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 08 09:45:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:13 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 4f6e05db-358c-451e-8e62-6c11a418e1af (Updating mgr deployment (+2 -> 3))
Oct 08 09:45:13 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 4f6e05db-358c-451e-8e62-6c11a418e1af (Updating mgr deployment (+2 -> 3)) in 8 seconds
Oct 08 09:45:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 08 09:45:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:13 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 6b090588-9c5d-45b5-8b61-76caf7676272 (Updating crash deployment (+1 -> 3))
Oct 08 09:45:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 08 09:45:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:45:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 08 09:45:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:13 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Oct 08 09:45:13 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Oct 08 09:45:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 08 09:45:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/413234013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:13 compute-0 ceph-mgr[73869]: mgr.server handle_report got status from non-daemon mon.compute-1
Oct 08 09:45:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:13.519+0000 7fa814663640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Oct 08 09:45:14 compute-0 ceph-mon[73572]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:14 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:14 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:14 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:14 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:14 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:45:14 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 08 09:45:14 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:14 compute-0 ceph-mon[73572]: Deploying daemon crash.compute-2 on compute-2
Oct 08 09:45:14 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/413234013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 08 09:45:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/413234013' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Oct 08 09:45:14 compute-0 intelligent_faraday[84203]: pool 'vms' created
Oct 08 09:45:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct 08 09:45:14 compute-0 systemd[1]: libpod-a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc.scope: Deactivated successfully.
Oct 08 09:45:14 compute-0 podman[84187]: 2025-10-08 09:45:14.349689513 +0000 UTC m=+7.463984447 container died a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 09:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-188c5be260ce5df253dd263ebb6d0582559fba0978701d3d9e349877dfd2c1d0-merged.mount: Deactivated successfully.
Oct 08 09:45:14 compute-0 podman[84187]: 2025-10-08 09:45:14.421599496 +0000 UTC m=+7.535894430 container remove a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 09:45:14 compute-0 systemd[1]: libpod-conmon-a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc.scope: Deactivated successfully.
Oct 08 09:45:14 compute-0 sudo[84184]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:14 compute-0 sudo[84266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyecbcscubnxscfsywtjefgwogujglmj ; /usr/bin/python3'
Oct 08 09:45:14 compute-0 sudo[84266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:14 compute-0 python3[84268]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:14 compute-0 podman[84269]: 2025-10-08 09:45:14.865870482 +0000 UTC m=+0.020030983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:15 compute-0 podman[84269]: 2025-10-08 09:45:15.076258289 +0000 UTC m=+0.230418740 container create aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:15 compute-0 systemd[1]: Started libpod-conmon-aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed.scope.
Oct 08 09:45:15 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b11d5a3a564521372d79f314c15ce41051c122e1225ad50d10c1acbe490c9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b11d5a3a564521372d79f314c15ce41051c122e1225ad50d10c1acbe490c9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:15 compute-0 podman[84269]: 2025-10-08 09:45:15.303417491 +0000 UTC m=+0.457577962 container init aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:15 compute-0 podman[84269]: 2025-10-08 09:45:15.313045409 +0000 UTC m=+0.467205860 container start aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 08 09:45:15 compute-0 podman[84269]: 2025-10-08 09:45:15.347606653 +0000 UTC m=+0.501767104 container attach aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Oct 08 09:45:15 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/413234013' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:15 compute-0 ceph-mon[73572]: osdmap e15: 2 total, 2 up, 2 in
Oct 08 09:45:15 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:15 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 6b090588-9c5d-45b5-8b61-76caf7676272 (Updating crash deployment (+1 -> 3))
Oct 08 09:45:15 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 6b090588-9c5d-45b5-8b61-76caf7676272 (Updating crash deployment (+1 -> 3)) in 2 seconds
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:15 compute-0 sudo[84308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:15 compute-0 sudo[84308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:15 compute-0 sudo[84308]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:15 compute-0 sudo[84333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:45:15 compute-0 sudo[84333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 08 09:45:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2222990356' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:16 compute-0 podman[84401]: 2025-10-08 09:45:16.009558569 +0000 UTC m=+0.019349855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:16 compute-0 podman[84401]: 2025-10-08 09:45:16.161182766 +0000 UTC m=+0.170974022 container create fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:16 compute-0 systemd[1]: Started libpod-conmon-fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737.scope.
Oct 08 09:45:16 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:16 compute-0 podman[84401]: 2025-10-08 09:45:16.39802007 +0000 UTC m=+0.407811346 container init fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 09:45:16 compute-0 podman[84401]: 2025-10-08 09:45:16.403534528 +0000 UTC m=+0.413325784 container start fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:45:16 compute-0 competent_germain[84418]: 167 167
Oct 08 09:45:16 compute-0 systemd[1]: libpod-fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737.scope: Deactivated successfully.
Oct 08 09:45:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 08 09:45:16 compute-0 podman[84401]: 2025-10-08 09:45:16.470258176 +0000 UTC m=+0.480049532 container attach fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:16 compute-0 podman[84401]: 2025-10-08 09:45:16.470749577 +0000 UTC m=+0.480540873 container died fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:16 compute-0 ceph-mon[73572]: pgmap v64: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:16 compute-0 ceph-mon[73572]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 08 09:45:16 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:16 compute-0 ceph-mon[73572]: osdmap e16: 2 total, 2 up, 2 in
Oct 08 09:45:16 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:16 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:16 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:45:16 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:45:16 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:16 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:45:16 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:16 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2222990356' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2222990356' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Oct 08 09:45:16 compute-0 nice_allen[84285]: pool 'volumes' created
Oct 08 09:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4aaf7c4f3b079466f23c4f77442fb42509a5d5497f41e1258605da179678cf1-merged.mount: Deactivated successfully.
Oct 08 09:45:16 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Oct 08 09:45:16 compute-0 podman[84401]: 2025-10-08 09:45:16.552129402 +0000 UTC m=+0.561920658 container remove fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:16 compute-0 systemd[1]: libpod-aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed.scope: Deactivated successfully.
Oct 08 09:45:16 compute-0 podman[84269]: 2025-10-08 09:45:16.554650287 +0000 UTC m=+1.708810738 container died aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b11d5a3a564521372d79f314c15ce41051c122e1225ad50d10c1acbe490c9e-merged.mount: Deactivated successfully.
Oct 08 09:45:16 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 5 completed events
Oct 08 09:45:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:45:16 compute-0 podman[84269]: 2025-10-08 09:45:16.602220119 +0000 UTC m=+1.756380570 container remove aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:16 compute-0 systemd[1]: libpod-conmon-aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed.scope: Deactivated successfully.
Oct 08 09:45:16 compute-0 systemd[1]: libpod-conmon-fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737.scope: Deactivated successfully.
Oct 08 09:45:16 compute-0 sudo[84266]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:16 compute-0 podman[84454]: 2025-10-08 09:45:16.716115273 +0000 UTC m=+0.044990977 container create 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:16 compute-0 sudo[84489]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfgtbprqyvzfxmmyudezggtafoorneyn ; /usr/bin/python3'
Oct 08 09:45:16 compute-0 sudo[84489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:16 compute-0 systemd[1]: Started libpod-conmon-80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd.scope.
Oct 08 09:45:16 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:16 compute-0 podman[84454]: 2025-10-08 09:45:16.692250194 +0000 UTC m=+0.021125918 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:16 compute-0 podman[84454]: 2025-10-08 09:45:16.805885176 +0000 UTC m=+0.134760890 container init 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:16 compute-0 podman[84454]: 2025-10-08 09:45:16.812849066 +0000 UTC m=+0.141724770 container start 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:16 compute-0 podman[84454]: 2025-10-08 09:45:16.821605378 +0000 UTC m=+0.150481102 container attach 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 08 09:45:16 compute-0 python3[84493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:16 compute-0 podman[84501]: 2025-10-08 09:45:16.93979109 +0000 UTC m=+0.045224057 container create 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v67: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:16 compute-0 systemd[1]: Started libpod-conmon-4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1.scope.
Oct 08 09:45:17 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:17 compute-0 podman[84501]: 2025-10-08 09:45:16.91878986 +0000 UTC m=+0.024222807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f8bdd67e6680adba3e4a589b635fbafe3539cffc1acbb6e355174d2d399758/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f8bdd67e6680adba3e4a589b635fbafe3539cffc1acbb6e355174d2d399758/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:17 compute-0 podman[84501]: 2025-10-08 09:45:17.026458275 +0000 UTC m=+0.131891202 container init 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 09:45:17 compute-0 podman[84501]: 2025-10-08 09:45:17.036508402 +0000 UTC m=+0.141941329 container start 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 09:45:17 compute-0 podman[84501]: 2025-10-08 09:45:17.041962058 +0000 UTC m=+0.147395005 container attach 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:17 compute-0 boring_bell[84494]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:45:17 compute-0 boring_bell[84494]: --> All data devices are unavailable
Oct 08 09:45:17 compute-0 systemd[1]: libpod-80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd.scope: Deactivated successfully.
Oct 08 09:45:17 compute-0 podman[84454]: 2025-10-08 09:45:17.179386838 +0000 UTC m=+0.508262542 container died 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c-merged.mount: Deactivated successfully.
Oct 08 09:45:17 compute-0 podman[84454]: 2025-10-08 09:45:17.248811277 +0000 UTC m=+0.577686981 container remove 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 09:45:17 compute-0 systemd[1]: libpod-conmon-80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd.scope: Deactivated successfully.
Oct 08 09:45:17 compute-0 sudo[84333]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:17 compute-0 sudo[84563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:17 compute-0 sudo[84563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:17 compute-0 sudo[84563]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 08 09:45:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3583095774' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:17 compute-0 sudo[84588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:45:17 compute-0 sudo[84588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:17 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2222990356' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:17 compute-0 ceph-mon[73572]: osdmap e17: 2 total, 2 up, 2 in
Oct 08 09:45:17 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:17 compute-0 ceph-mon[73572]: pgmap v67: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:17 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3583095774' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"} v 0)
Oct 08 09:45:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]: dispatch
Oct 08 09:45:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 08 09:45:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3583095774' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]': finished
Oct 08 09:45:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Oct 08 09:45:17 compute-0 wonderful_feynman[84517]: pool 'backups' created
Oct 08 09:45:17 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Oct 08 09:45:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:17 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:17 compute-0 systemd[1]: libpod-4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1.scope: Deactivated successfully.
Oct 08 09:45:17 compute-0 podman[84501]: 2025-10-08 09:45:17.623403134 +0000 UTC m=+0.728836051 container died 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-92f8bdd67e6680adba3e4a589b635fbafe3539cffc1acbb6e355174d2d399758-merged.mount: Deactivated successfully.
Oct 08 09:45:17 compute-0 podman[84501]: 2025-10-08 09:45:17.697099101 +0000 UTC m=+0.802532038 container remove 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:17 compute-0 systemd[1]: libpod-conmon-4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1.scope: Deactivated successfully.
Oct 08 09:45:17 compute-0 sudo[84489]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:17 compute-0 podman[84664]: 2025-10-08 09:45:17.844091747 +0000 UTC m=+0.042972473 container create 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:17 compute-0 systemd[1]: Started libpod-conmon-264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10.scope.
Oct 08 09:45:17 compute-0 sudo[84701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfarrjkqsqbifhkozcfqilsuygwgdijf ; /usr/bin/python3'
Oct 08 09:45:17 compute-0 sudo[84701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:17 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:17 compute-0 podman[84664]: 2025-10-08 09:45:17.914364762 +0000 UTC m=+0.113245518 container init 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 09:45:17 compute-0 podman[84664]: 2025-10-08 09:45:17.922630334 +0000 UTC m=+0.121511060 container start 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:45:17 compute-0 podman[84664]: 2025-10-08 09:45:17.827820542 +0000 UTC m=+0.026701288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:17 compute-0 podman[84664]: 2025-10-08 09:45:17.925768845 +0000 UTC m=+0.124649601 container attach 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 09:45:17 compute-0 intelligent_agnesi[84705]: 167 167
Oct 08 09:45:17 compute-0 systemd[1]: libpod-264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10.scope: Deactivated successfully.
Oct 08 09:45:17 compute-0 podman[84664]: 2025-10-08 09:45:17.92901997 +0000 UTC m=+0.127900696 container died 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef8ab4c5fb1cd573d170484e76da240c2a69fa7dd6c7e1b6a3420a25aa6fc336-merged.mount: Deactivated successfully.
Oct 08 09:45:17 compute-0 podman[84664]: 2025-10-08 09:45:17.962882974 +0000 UTC m=+0.161763700 container remove 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:17 compute-0 systemd[1]: libpod-conmon-264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10.scope: Deactivated successfully.
Oct 08 09:45:18 compute-0 python3[84707]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:18 compute-0 podman[84725]: 2025-10-08 09:45:18.09995856 +0000 UTC m=+0.051828571 container create 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:18 compute-0 systemd[1]: Started libpod-conmon-7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726.scope.
Oct 08 09:45:18 compute-0 podman[84740]: 2025-10-08 09:45:18.150228515 +0000 UTC m=+0.068338846 container create 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:18 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ea1dcf3947b191752b402681f7d6449f6b28beb27563da3223d9f76e6079bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ea1dcf3947b191752b402681f7d6449f6b28beb27563da3223d9f76e6079bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:18 compute-0 systemd[1]: Started libpod-conmon-4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408.scope.
Oct 08 09:45:18 compute-0 podman[84725]: 2025-10-08 09:45:18.076308318 +0000 UTC m=+0.028178339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:18 compute-0 podman[84725]: 2025-10-08 09:45:18.175062555 +0000 UTC m=+0.126932546 container init 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 08 09:45:18 compute-0 podman[84725]: 2025-10-08 09:45:18.181056853 +0000 UTC m=+0.132926824 container start 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 08 09:45:18 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:18 compute-0 podman[84725]: 2025-10-08 09:45:18.185543149 +0000 UTC m=+0.137413120 container attach 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:18 compute-0 podman[84740]: 2025-10-08 09:45:18.196233322 +0000 UTC m=+0.114343653 container init 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 08 09:45:18 compute-0 podman[84740]: 2025-10-08 09:45:18.20412254 +0000 UTC m=+0.122232871 container start 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:18 compute-0 podman[84740]: 2025-10-08 09:45:18.207183277 +0000 UTC m=+0.125293608 container attach 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:18 compute-0 podman[84740]: 2025-10-08 09:45:18.126479629 +0000 UTC m=+0.044590040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:18 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct 08 09:45:18 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mgr.compute-2.mtagwx 192.168.122.102:0/1031292428; not ready for session (expect reconnect)
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]: {
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:     "1": [
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:         {
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "devices": [
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "/dev/loop3"
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             ],
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "lv_name": "ceph_lv0",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "lv_size": "21470642176",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "name": "ceph_lv0",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "tags": {
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.cluster_name": "ceph",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.crush_device_class": "",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.encrypted": "0",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.osd_id": "1",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.type": "block",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.vdo": "0",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:                 "ceph.with_tpm": "0"
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             },
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "type": "block",
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:             "vg_name": "ceph_vg0"
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:         }
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]:     ]
Oct 08 09:45:18 compute-0 dazzling_hellman[84766]: }
Oct 08 09:45:18 compute-0 systemd[1]: libpod-4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408.scope: Deactivated successfully.
Oct 08 09:45:18 compute-0 podman[84740]: 2025-10-08 09:45:18.526328494 +0000 UTC m=+0.444438855 container died 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 08 09:45:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/739388561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:18 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3019668088' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]: dispatch
Oct 08 09:45:18 compute-0 ceph-mon[73572]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]: dispatch
Oct 08 09:45:18 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3583095774' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:18 compute-0 ceph-mon[73572]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]': finished
Oct 08 09:45:18 compute-0 ceph-mon[73572]: osdmap e18: 3 total, 2 up, 3 in
Oct 08 09:45:18 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:18 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1590761823' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 08 09:45:18 compute-0 ceph-mon[73572]: Standby manager daemon compute-2.mtagwx started
Oct 08 09:45:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5-merged.mount: Deactivated successfully.
Oct 08 09:45:18 compute-0 podman[84740]: 2025-10-08 09:45:18.578172534 +0000 UTC m=+0.496282845 container remove 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:45:18 compute-0 systemd[1]: libpod-conmon-4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408.scope: Deactivated successfully.
Oct 08 09:45:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 08 09:45:18 compute-0 sudo[84588]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/739388561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Oct 08 09:45:18 compute-0 unruffled_lovelace[84758]: pool 'images' created
Oct 08 09:45:18 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Oct 08 09:45:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:18 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:18 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:18 compute-0 systemd[1]: libpod-7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726.scope: Deactivated successfully.
Oct 08 09:45:18 compute-0 podman[84725]: 2025-10-08 09:45:18.635344426 +0000 UTC m=+0.587214407 container died 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:18 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.ixicfj(active, since 116s), standbys: compute-2.mtagwx
Oct 08 09:45:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"} v 0)
Oct 08 09:45:18 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct 08 09:45:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3ea1dcf3947b191752b402681f7d6449f6b28beb27563da3223d9f76e6079bb-merged.mount: Deactivated successfully.
Oct 08 09:45:18 compute-0 podman[84725]: 2025-10-08 09:45:18.669996862 +0000 UTC m=+0.621866823 container remove 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 09:45:18 compute-0 systemd[1]: libpod-conmon-7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726.scope: Deactivated successfully.
Oct 08 09:45:18 compute-0 sudo[84701]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:18 compute-0 sudo[84810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:18 compute-0 sudo[84810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:18 compute-0 sudo[84810]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:18 compute-0 sudo[84846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:45:18 compute-0 sudo[84846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:18 compute-0 sudo[84894]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jayquxqihflwlkyjcimvicochwqiqtqy ; /usr/bin/python3'
Oct 08 09:45:18 compute-0 sudo[84894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v70: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:18 compute-0 python3[84896]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:19 compute-0 podman[84910]: 2025-10-08 09:45:19.036568826 +0000 UTC m=+0.060427307 container create 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:19 compute-0 systemd[1]: Started libpod-conmon-75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa.scope.
Oct 08 09:45:19 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89192ae0dfc04ecf6a8022a8af42fae7a40dabc1cf6773eaec6bbf0b22324e35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89192ae0dfc04ecf6a8022a8af42fae7a40dabc1cf6773eaec6bbf0b22324e35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:19 compute-0 podman[84910]: 2025-10-08 09:45:19.005396313 +0000 UTC m=+0.029254814 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:19 compute-0 podman[84910]: 2025-10-08 09:45:19.116599296 +0000 UTC m=+0.140457797 container init 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:19 compute-0 podman[84910]: 2025-10-08 09:45:19.122411947 +0000 UTC m=+0.146270428 container start 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 09:45:19 compute-0 podman[84910]: 2025-10-08 09:45:19.140402073 +0000 UTC m=+0.164260574 container attach 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 09:45:19 compute-0 podman[84952]: 2025-10-08 09:45:19.21795161 +0000 UTC m=+0.051651134 container create 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:19 compute-0 systemd[1]: Started libpod-conmon-8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d.scope.
Oct 08 09:45:19 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:19 compute-0 podman[84952]: 2025-10-08 09:45:19.189090083 +0000 UTC m=+0.022789627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:19 compute-0 podman[84952]: 2025-10-08 09:45:19.286109986 +0000 UTC m=+0.119809540 container init 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 08 09:45:19 compute-0 podman[84952]: 2025-10-08 09:45:19.291360694 +0000 UTC m=+0.125060218 container start 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 08 09:45:19 compute-0 hungry_allen[84987]: 167 167
Oct 08 09:45:19 compute-0 systemd[1]: libpod-8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d.scope: Deactivated successfully.
Oct 08 09:45:19 compute-0 podman[84952]: 2025-10-08 09:45:19.310153594 +0000 UTC m=+0.143853128 container attach 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:19 compute-0 podman[84952]: 2025-10-08 09:45:19.310607812 +0000 UTC m=+0.144307336 container died 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-93b79c101f3f252014f4067458e9d9cbe67b1d994dfd90d6b6194f70c158d11b-merged.mount: Deactivated successfully.
Oct 08 09:45:19 compute-0 podman[84952]: 2025-10-08 09:45:19.35610839 +0000 UTC m=+0.189807914 container remove 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:45:19 compute-0 systemd[1]: libpod-conmon-8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d.scope: Deactivated successfully.
Oct 08 09:45:19 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct 08 09:45:19 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from mgr.compute-1.swlvov 192.168.122.101:0/1376433089; not ready for session (expect reconnect)
Oct 08 09:45:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 08 09:45:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672510145' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:19 compute-0 podman[85014]: 2025-10-08 09:45:19.552582279 +0000 UTC m=+0.041745073 container create 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Oct 08 09:45:19 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/739388561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:19 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/739388561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:19 compute-0 ceph-mon[73572]: osdmap e19: 3 total, 2 up, 3 in
Oct 08 09:45:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:19 compute-0 ceph-mon[73572]: mgrmap e10: compute-0.ixicfj(active, since 116s), standbys: compute-2.mtagwx
Oct 08 09:45:19 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct 08 09:45:19 compute-0 ceph-mon[73572]: pgmap v70: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:19 compute-0 ceph-mon[73572]: Standby manager daemon compute-1.swlvov started
Oct 08 09:45:19 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/672510145' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:19 compute-0 systemd[1]: Started libpod-conmon-34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc.scope.
Oct 08 09:45:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 08 09:45:19 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:19 compute-0 podman[85014]: 2025-10-08 09:45:19.535644526 +0000 UTC m=+0.024807330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672510145' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Oct 08 09:45:19 compute-0 nostalgic_haslett[84948]: pool 'cephfs.cephfs.meta' created
Oct 08 09:45:19 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Oct 08 09:45:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:19 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:19 compute-0 podman[85014]: 2025-10-08 09:45:19.643138765 +0000 UTC m=+0.132301569 container init 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 08 09:45:19 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:19 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:19 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:19 compute-0 podman[85014]: 2025-10-08 09:45:19.653089447 +0000 UTC m=+0.142252241 container start 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 09:45:19 compute-0 podman[85014]: 2025-10-08 09:45:19.657542552 +0000 UTC m=+0.146705326 container attach 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:45:19 compute-0 systemd[1]: libpod-75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa.scope: Deactivated successfully.
Oct 08 09:45:19 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.ixicfj(active, since 117s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:45:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"} v 0)
Oct 08 09:45:19 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct 08 09:45:19 compute-0 podman[85036]: 2025-10-08 09:45:19.698122475 +0000 UTC m=+0.023711815 container died 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-89192ae0dfc04ecf6a8022a8af42fae7a40dabc1cf6773eaec6bbf0b22324e35-merged.mount: Deactivated successfully.
Oct 08 09:45:19 compute-0 podman[85036]: 2025-10-08 09:45:19.747402079 +0000 UTC m=+0.072991409 container remove 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:45:19 compute-0 systemd[1]: libpod-conmon-75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa.scope: Deactivated successfully.
Oct 08 09:45:19 compute-0 sudo[84894]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:19 compute-0 sudo[85088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqxclmameepzjeolhtzrbqyaogtucywp ; /usr/bin/python3'
Oct 08 09:45:19 compute-0 sudo[85088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:20 compute-0 python3[85093]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:20 compute-0 podman[85125]: 2025-10-08 09:45:20.159893658 +0000 UTC m=+0.060313012 container create 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 09:45:20 compute-0 systemd[1]: Started libpod-conmon-29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e.scope.
Oct 08 09:45:20 compute-0 podman[85125]: 2025-10-08 09:45:20.123046269 +0000 UTC m=+0.023465643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:20 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b60ec70a6c93318129c7e2e07fcaa9f9ff8097b95e2fc9495498aed930daa9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b60ec70a6c93318129c7e2e07fcaa9f9ff8097b95e2fc9495498aed930daa9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:20 compute-0 podman[85125]: 2025-10-08 09:45:20.272279499 +0000 UTC m=+0.172698873 container init 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 09:45:20 compute-0 lvm[85163]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:45:20 compute-0 podman[85125]: 2025-10-08 09:45:20.279104902 +0000 UTC m=+0.179524256 container start 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:20 compute-0 lvm[85163]: VG ceph_vg0 finished
Oct 08 09:45:20 compute-0 podman[85125]: 2025-10-08 09:45:20.293781801 +0000 UTC m=+0.194201165 container attach 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:20 compute-0 busy_chaplygin[85030]: {}
Oct 08 09:45:20 compute-0 systemd[1]: libpod-34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc.scope: Deactivated successfully.
Oct 08 09:45:20 compute-0 podman[85014]: 2025-10-08 09:45:20.360823521 +0000 UTC m=+0.849986295 container died 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:45:20 compute-0 systemd[1]: libpod-34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc.scope: Consumed 1.134s CPU time.
Oct 08 09:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926-merged.mount: Deactivated successfully.
Oct 08 09:45:20 compute-0 podman[85014]: 2025-10-08 09:45:20.561108199 +0000 UTC m=+1.050270973 container remove 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:20 compute-0 systemd[1]: libpod-conmon-34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc.scope: Deactivated successfully.
Oct 08 09:45:20 compute-0 sudo[84846]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:45:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 08 09:45:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2341319636' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 08 09:45:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:45:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2341319636' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Oct 08 09:45:20 compute-0 jolly_fermat[85159]: pool 'cephfs.cephfs.data' created
Oct 08 09:45:20 compute-0 systemd[1]: libpod-29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e.scope: Deactivated successfully.
Oct 08 09:45:20 compute-0 podman[85125]: 2025-10-08 09:45:20.692754499 +0000 UTC m=+0.593173863 container died 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:20 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Oct 08 09:45:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:20 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/672510145' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:20 compute-0 ceph-mon[73572]: osdmap e20: 3 total, 2 up, 3 in
Oct 08 09:45:20 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:20 compute-0 ceph-mon[73572]: mgrmap e11: compute-0.ixicfj(active, since 117s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:45:20 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct 08 09:45:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2341319636' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 08 09:45:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-12b60ec70a6c93318129c7e2e07fcaa9f9ff8097b95e2fc9495498aed930daa9-merged.mount: Deactivated successfully.
Oct 08 09:45:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:20 compute-0 podman[85125]: 2025-10-08 09:45:20.981159321 +0000 UTC m=+0.881578675 container remove 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 09:45:20 compute-0 sudo[85088]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:21 compute-0 systemd[1]: libpod-conmon-29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e.scope: Deactivated successfully.
Oct 08 09:45:21 compute-0 sudo[85236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-serkayexackbjngngzpnqhjggfxdypjb ; /usr/bin/python3'
Oct 08 09:45:21 compute-0 sudo[85236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:21 compute-0 python3[85238]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:21 compute-0 podman[85239]: 2025-10-08 09:45:21.377549681 +0000 UTC m=+0.038583551 container create dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:21 compute-0 systemd[1]: Started libpod-conmon-dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396.scope.
Oct 08 09:45:21 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44944ed71359d63dc9071dd6ac82d0fd7db9863075a14b665dc8661bf6ab239a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44944ed71359d63dc9071dd6ac82d0fd7db9863075a14b665dc8661bf6ab239a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:21 compute-0 podman[85239]: 2025-10-08 09:45:21.36064172 +0000 UTC m=+0.021675630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:21 compute-0 podman[85239]: 2025-10-08 09:45:21.458164195 +0000 UTC m=+0.119198065 container init dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:21 compute-0 podman[85239]: 2025-10-08 09:45:21.463877022 +0000 UTC m=+0.124910922 container start dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 08 09:45:21 compute-0 podman[85239]: 2025-10-08 09:45:21.467496632 +0000 UTC m=+0.128530522 container attach dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Oct 08 09:45:21 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3377256593' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 08 09:45:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 08 09:45:21 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 08 09:45:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3377256593' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 08 09:45:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Oct 08 09:45:22 compute-0 hardcore_dubinsky[85254]: enabled application 'rbd' on pool 'vms'
Oct 08 09:45:22 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Oct 08 09:45:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:22 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:22 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2341319636' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 08 09:45:22 compute-0 ceph-mon[73572]: osdmap e21: 3 total, 2 up, 3 in
Oct 08 09:45:22 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:22 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:22 compute-0 ceph-mon[73572]: pgmap v73: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:22 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3377256593' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 08 09:45:22 compute-0 systemd[1]: libpod-dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396.scope: Deactivated successfully.
Oct 08 09:45:22 compute-0 podman[85239]: 2025-10-08 09:45:22.175383772 +0000 UTC m=+0.836417642 container died dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-44944ed71359d63dc9071dd6ac82d0fd7db9863075a14b665dc8661bf6ab239a-merged.mount: Deactivated successfully.
Oct 08 09:45:22 compute-0 podman[85239]: 2025-10-08 09:45:22.230592742 +0000 UTC m=+0.891626662 container remove dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 09:45:22 compute-0 systemd[1]: libpod-conmon-dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396.scope: Deactivated successfully.
Oct 08 09:45:22 compute-0 sudo[85236]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:22 compute-0 sudo[85315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzodgntzlloobvhkzahzznxjeafdzqdp ; /usr/bin/python3'
Oct 08 09:45:22 compute-0 sudo[85315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:22 compute-0 python3[85317]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:22 compute-0 podman[85318]: 2025-10-08 09:45:22.54709939 +0000 UTC m=+0.038296129 container create 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:45:22
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [balancer INFO root] Some PGs (0.428571) are unknown; try again later
Oct 08 09:45:22 compute-0 systemd[1]: Started libpod-conmon-77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c.scope.
Oct 08 09:45:22 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 08 09:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9fcab9e8508930760cd4e04bb79930f01a7fb7d5b61d20a8688ce85c292e4ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:45:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9fcab9e8508930760cd4e04bb79930f01a7fb7d5b61d20a8688ce85c292e4ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:45:22 compute-0 podman[85318]: 2025-10-08 09:45:22.604805754 +0000 UTC m=+0.096002523 container init 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:45:22 compute-0 podman[85318]: 2025-10-08 09:45:22.617431907 +0000 UTC m=+0.108628646 container start 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:45:22 compute-0 podman[85318]: 2025-10-08 09:45:22.622593521 +0000 UTC m=+0.113790280 container attach 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:22 compute-0 podman[85318]: 2025-10-08 09:45:22.531264573 +0000 UTC m=+0.022461322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Oct 08 09:45:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601367079' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 08 09:45:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 08 09:45:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601367079' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 08 09:45:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Oct 08 09:45:23 compute-0 silly_shirley[85334]: enabled application 'rbd' on pool 'volumes'
Oct 08 09:45:23 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Oct 08 09:45:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:23 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:23 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:23 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 0ec7ed32-6b33-4f8f-9254-a63145d84250 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 08 09:45:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:45:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:23 compute-0 systemd[1]: libpod-77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c.scope: Deactivated successfully.
Oct 08 09:45:23 compute-0 podman[85318]: 2025-10-08 09:45:23.182594828 +0000 UTC m=+0.673791567 container died 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 09:45:23 compute-0 ceph-mon[73572]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 08 09:45:23 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3377256593' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 08 09:45:23 compute-0 ceph-mon[73572]: osdmap e22: 3 total, 2 up, 3 in
Oct 08 09:45:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:23 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1601367079' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 08 09:45:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:23 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1601367079' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 08 09:45:23 compute-0 ceph-mon[73572]: osdmap e23: 3 total, 2 up, 3 in
Oct 08 09:45:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:23 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9fcab9e8508930760cd4e04bb79930f01a7fb7d5b61d20a8688ce85c292e4ca-merged.mount: Deactivated successfully.
Oct 08 09:45:23 compute-0 podman[85318]: 2025-10-08 09:45:23.222695691 +0000 UTC m=+0.713892440 container remove 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:45:23 compute-0 systemd[1]: libpod-conmon-77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c.scope: Deactivated successfully.
Oct 08 09:45:23 compute-0 sudo[85315]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:23 compute-0 sudo[85393]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rijbiyhdsbasttfmizwivbijumszrsxx ; /usr/bin/python3'
Oct 08 09:45:23 compute-0 sudo[85393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:23 compute-0 python3[85395]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:23 compute-0 podman[85396]: 2025-10-08 09:45:23.571424185 +0000 UTC m=+0.055440891 container create a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:23 compute-0 systemd[1]: Started libpod-conmon-a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae.scope.
Oct 08 09:45:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9278d57ff5d76926e8baab6f379af1aee34be2c3fe3af7006cc8853ee315687/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9278d57ff5d76926e8baab6f379af1aee34be2c3fe3af7006cc8853ee315687/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:23 compute-0 podman[85396]: 2025-10-08 09:45:23.546201148 +0000 UTC m=+0.030217934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:23 compute-0 podman[85396]: 2025-10-08 09:45:23.646364964 +0000 UTC m=+0.130381710 container init a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:23 compute-0 podman[85396]: 2025-10-08 09:45:23.656254043 +0000 UTC m=+0.140270759 container start a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:23 compute-0 podman[85396]: 2025-10-08 09:45:23.660650356 +0000 UTC m=+0.144667112 container attach a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 09:45:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct 08 09:45:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 08 09:45:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:23 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:23 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Oct 08 09:45:23 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Oct 08 09:45:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Oct 08 09:45:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/841109346' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 08 09:45:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 08 09:45:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/841109346' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 08 09:45:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Oct 08 09:45:24 compute-0 angry_poincare[85411]: enabled application 'rbd' on pool 'backups'
Oct 08 09:45:24 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Oct 08 09:45:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:24 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:24 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:24 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 2227baea-d11a-4cde-b678-995960ba9c5f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 08 09:45:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:45:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:24 compute-0 ceph-mon[73572]: pgmap v75: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:24 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 08 09:45:24 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:24 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/841109346' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 08 09:45:24 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:24 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/841109346' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 08 09:45:24 compute-0 ceph-mon[73572]: osdmap e24: 3 total, 2 up, 3 in
Oct 08 09:45:24 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:24 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:24 compute-0 systemd[1]: libpod-a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae.scope: Deactivated successfully.
Oct 08 09:45:24 compute-0 conmon[85411]: conmon a69f749ee1e24f25db88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae.scope/container/memory.events
Oct 08 09:45:24 compute-0 podman[85396]: 2025-10-08 09:45:24.223050062 +0000 UTC m=+0.707066768 container died a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9278d57ff5d76926e8baab6f379af1aee34be2c3fe3af7006cc8853ee315687-merged.mount: Deactivated successfully.
Oct 08 09:45:24 compute-0 podman[85396]: 2025-10-08 09:45:24.274474505 +0000 UTC m=+0.758491231 container remove a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:24 compute-0 systemd[1]: libpod-conmon-a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae.scope: Deactivated successfully.
Oct 08 09:45:24 compute-0 sudo[85393]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:24 compute-0 sudo[85471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sypksgyekdwwslthonfwvkrzlrwbgdbz ; /usr/bin/python3'
Oct 08 09:45:24 compute-0 sudo[85471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:24 compute-0 python3[85473]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:24 compute-0 podman[85474]: 2025-10-08 09:45:24.64083132 +0000 UTC m=+0.054354775 container create 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:24 compute-0 systemd[1]: Started libpod-conmon-0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84.scope.
Oct 08 09:45:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cabda36346f1107ec927bf1612ff44e61279f47c52c081cbfd5192517acff2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cabda36346f1107ec927bf1612ff44e61279f47c52c081cbfd5192517acff2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:24 compute-0 podman[85474]: 2025-10-08 09:45:24.705971292 +0000 UTC m=+0.119494827 container init 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:24 compute-0 podman[85474]: 2025-10-08 09:45:24.614174174 +0000 UTC m=+0.027697719 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:24 compute-0 podman[85474]: 2025-10-08 09:45:24.712841947 +0000 UTC m=+0.126365432 container start 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:24 compute-0 podman[85474]: 2025-10-08 09:45:24.71580041 +0000 UTC m=+0.129323875 container attach 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:45:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:45:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Oct 08 09:45:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2076445319' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 08 09:45:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 08 09:45:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2076445319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 08 09:45:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Oct 08 09:45:25 compute-0 lucid_hellman[85489]: enabled application 'rbd' on pool 'images'
Oct 08 09:45:25 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Oct 08 09:45:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:25 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:25 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 05a1d3c6-ed35-41d2-9081-50c37b873654 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 08 09:45:25 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:45:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:25 compute-0 ceph-mon[73572]: Deploying daemon osd.2 on compute-2
Oct 08 09:45:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:25 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2076445319' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 08 09:45:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:25 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2076445319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 08 09:45:25 compute-0 ceph-mon[73572]: osdmap e25: 3 total, 2 up, 3 in
Oct 08 09:45:25 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:25 compute-0 systemd[1]: libpod-0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84.scope: Deactivated successfully.
Oct 08 09:45:25 compute-0 podman[85474]: 2025-10-08 09:45:25.226793283 +0000 UTC m=+0.640316808 container died 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9cabda36346f1107ec927bf1612ff44e61279f47c52c081cbfd5192517acff2-merged.mount: Deactivated successfully.
Oct 08 09:45:25 compute-0 podman[85474]: 2025-10-08 09:45:25.270454295 +0000 UTC m=+0.683977750 container remove 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 09:45:25 compute-0 systemd[1]: libpod-conmon-0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84.scope: Deactivated successfully.
Oct 08 09:45:25 compute-0 sudo[85471]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:25 compute-0 sudo[85548]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eymfbvffteikonihtyaxrrgzeggkayrq ; /usr/bin/python3'
Oct 08 09:45:25 compute-0 sudo[85548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:25 compute-0 python3[85550]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:25 compute-0 podman[85551]: 2025-10-08 09:45:25.678196586 +0000 UTC m=+0.040447899 container create f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 09:45:25 compute-0 systemd[1]: Started libpod-conmon-f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492.scope.
Oct 08 09:45:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962d2b9e85bf9e67d3ef657038b6e595d92cfd7c44ff43c90edfc8534bdb7ab6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962d2b9e85bf9e67d3ef657038b6e595d92cfd7c44ff43c90edfc8534bdb7ab6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:25 compute-0 podman[85551]: 2025-10-08 09:45:25.733972059 +0000 UTC m=+0.096223372 container init f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 09:45:25 compute-0 podman[85551]: 2025-10-08 09:45:25.739471808 +0000 UTC m=+0.101723121 container start f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:25 compute-0 podman[85551]: 2025-10-08 09:45:25.742490213 +0000 UTC m=+0.104741526 container attach f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 09:45:25 compute-0 podman[85551]: 2025-10-08 09:45:25.660988952 +0000 UTC m=+0.023240285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Oct 08 09:45:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3506179030' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 08 09:45:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 08 09:45:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3506179030' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 08 09:45:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Oct 08 09:45:26 compute-0 vibrant_lamport[85566]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 08 09:45:26 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Oct 08 09:45:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:26 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:26 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev d5e796e6-991f-4eb6-8371-6de3595026e8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 08 09:45:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:45:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:26 compute-0 ceph-mon[73572]: pgmap v78: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:26 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:26 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3506179030' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 08 09:45:26 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:26 compute-0 systemd[1]: libpod-f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492.scope: Deactivated successfully.
Oct 08 09:45:26 compute-0 podman[85551]: 2025-10-08 09:45:26.236256692 +0000 UTC m=+0.598508005 container died f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-962d2b9e85bf9e67d3ef657038b6e595d92cfd7c44ff43c90edfc8534bdb7ab6-merged.mount: Deactivated successfully.
Oct 08 09:45:26 compute-0 podman[85551]: 2025-10-08 09:45:26.274281258 +0000 UTC m=+0.636532571 container remove f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:26 compute-0 systemd[1]: libpod-conmon-f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492.scope: Deactivated successfully.
Oct 08 09:45:26 compute-0 sudo[85548]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:26 compute-0 sudo[85627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzheewmprhrjrjycljuuvyuiqvjhnhlr ; /usr/bin/python3'
Oct 08 09:45:26 compute-0 sudo[85627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:26 compute-0 python3[85629]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:26 compute-0 ceph-mgr[73869]: [progress WARNING root] Starting Global Recovery Event,63 pgs not in active + clean state
Oct 08 09:45:26 compute-0 podman[85630]: 2025-10-08 09:45:26.630314316 +0000 UTC m=+0.044260426 container create c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:26 compute-0 systemd[1]: Started libpod-conmon-c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7.scope.
Oct 08 09:45:26 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb256490d7ed2d4db6de2eec988bcd199b9e95b6ff041938f8df0243d7e184d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb256490d7ed2d4db6de2eec988bcd199b9e95b6ff041938f8df0243d7e184d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:26 compute-0 podman[85630]: 2025-10-08 09:45:26.610368339 +0000 UTC m=+0.024314519 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:26 compute-0 podman[85630]: 2025-10-08 09:45:26.714867883 +0000 UTC m=+0.128813993 container init c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 09:45:26 compute-0 podman[85630]: 2025-10-08 09:45:26.724692731 +0000 UTC m=+0.138638831 container start c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:26 compute-0 podman[85630]: 2025-10-08 09:45:26.727840331 +0000 UTC m=+0.141786471 container attach c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 09:45:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v81: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:45:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:45:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Oct 08 09:45:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3314825613' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 08 09:45:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 08 09:45:27 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 08 09:45:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3314825613' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 08 09:45:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Oct 08 09:45:27 compute-0 confident_meitner[85646]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 08 09:45:27 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Oct 08 09:45:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:27 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:27 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:27 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 203c02a5-7e6a-438b-b565-7702405c80f6 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 08 09:45:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:45:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:27 compute-0 systemd[1]: libpod-c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7.scope: Deactivated successfully.
Oct 08 09:45:27 compute-0 conmon[85646]: conmon c1639a0c2f220a0cf3b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7.scope/container/memory.events
Oct 08 09:45:27 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3506179030' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 08 09:45:27 compute-0 ceph-mon[73572]: osdmap e26: 3 total, 2 up, 3 in
Oct 08 09:45:27 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:27 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:27 compute-0 podman[85630]: 2025-10-08 09:45:27.384224696 +0000 UTC m=+0.798170816 container died c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:27 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:27 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:27 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3314825613' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 08 09:45:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cb256490d7ed2d4db6de2eec988bcd199b9e95b6ff041938f8df0243d7e184d-merged.mount: Deactivated successfully.
Oct 08 09:45:27 compute-0 podman[85630]: 2025-10-08 09:45:27.433412816 +0000 UTC m=+0.847358916 container remove c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:27 compute-0 systemd[1]: libpod-conmon-c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7.scope: Deactivated successfully.
Oct 08 09:45:27 compute-0 sudo[85627]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 08 09:45:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Oct 08 09:45:28 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Oct 08 09:45:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:28 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 086b694d-8d81-43a9-9ec1-a27dc45770c2 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 0ec7ed32-6b33-4f8f-9254-a63145d84250 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 0ec7ed32-6b33-4f8f-9254-a63145d84250 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 2227baea-d11a-4cde-b678-995960ba9c5f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 2227baea-d11a-4cde-b678-995960ba9c5f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 05a1d3c6-ed35-41d2-9081-50c37b873654 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 05a1d3c6-ed35-41d2-9081-50c37b873654 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev d5e796e6-991f-4eb6-8371-6de3595026e8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event d5e796e6-991f-4eb6-8371-6de3595026e8 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 203c02a5-7e6a-438b-b565-7702405c80f6 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 203c02a5-7e6a-438b-b565-7702405c80f6 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 086b694d-8d81-43a9-9ec1-a27dc45770c2 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 086b694d-8d81-43a9-9ec1-a27dc45770c2 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Oct 08 09:45:28 compute-0 ceph-mon[73572]: pgmap v81: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:28 compute-0 ceph-mon[73572]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 08 09:45:28 compute-0 ceph-mon[73572]: 2.1e scrub starts
Oct 08 09:45:28 compute-0 ceph-mon[73572]: 2.1e scrub ok
Oct 08 09:45:28 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:28 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:28 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:28 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3314825613' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 08 09:45:28 compute-0 ceph-mon[73572]: osdmap e27: 3 total, 2 up, 3 in
Oct 08 09:45:28 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:28 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:45:28 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:45:28 compute-0 ceph-mon[73572]: osdmap e28: 3 total, 2 up, 3 in
Oct 08 09:45:28 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:28 compute-0 python3[85757]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 27 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=15.088713646s) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 71.723999023s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 25 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=13.063826561s) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active pruub 69.699134827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 27 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=27 pruub=14.060723305s) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active pruub 70.696052551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=13.063826561s) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown pruub 69.699134827s@ mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=27 pruub=14.060723305s) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown pruub 70.696052551s@ mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.12( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=15.088713646s) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown pruub 71.723999023s@ mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.17( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.18( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.4( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.2( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.6( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.7( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.19( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.10( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.11( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1a( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1b( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.12( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.13( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1e( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1f( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.14( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.15( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.c( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.d( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.e( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.f( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.16( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.17( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.2( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.3( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.4( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.5( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.6( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.7( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.8( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.18( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.19( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.9( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.a( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.b( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1c( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1d( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.4( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.5( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.6( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.7( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.12( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.13( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.16( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.17( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1a( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.2( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.3( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.a( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.b( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.c( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.d( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.e( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.f( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.10( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.11( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.15( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1b( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1c( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.8( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.9( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.18( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.19( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.14( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1d( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1e( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1f( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:28 compute-0 python3[85828]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916728.161193-33698-228603819106303/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:45:28 compute-0 systemd[74898]: Starting Mark boot as successful...
Oct 08 09:45:28 compute-0 systemd[74898]: Finished Mark boot as successful.
Oct 08 09:45:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v84: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:45:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:45:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:29 compute-0 sudo[85929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rijwsegvryslfxzkgufsqcgylrdrmhen ; /usr/bin/python3'
Oct 08 09:45:29 compute-0 sudo[85929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 08 09:45:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 08 09:45:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 08 09:45:29 compute-0 ceph-mon[73572]: 2.1f scrub starts
Oct 08 09:45:29 compute-0 ceph-mon[73572]: 2.1f scrub ok
Oct 08 09:45:29 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:29 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:29 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:29 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Oct 08 09:45:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Oct 08 09:45:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:29 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:29 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.471620560s) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 72.974006653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.471620560s) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown pruub 72.974006653s@ mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.19( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1c( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.2( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.4( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.4( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.5( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.2( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.3( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.0( empty local-lis/les=27/29 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.7( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.7( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.6( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.c( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.b( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.17( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.16( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.17( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.14( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.12( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.11( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.15( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.10( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.10( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.19( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1e( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:29 compute-0 python3[85931]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:45:29 compute-0 sudo[85929]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:29 compute-0 sudo[86004]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvjdsldyunizvbpwgldzttcvghgclnni ; /usr/bin/python3'
Oct 08 09:45:29 compute-0 sudo[86004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:29 compute-0 python3[86006]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916729.1539378-33712-206561102441581/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=39cc0911497a7006f64158006f884d8a68db01c1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:45:29 compute-0 sudo[86004]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:29 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 08 09:45:29 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 08 09:45:30 compute-0 sudo[86054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfhoiodtivxpaporhpyylhjbknlhulug ; /usr/bin/python3'
Oct 08 09:45:30 compute-0 sudo[86054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:30 compute-0 python3[86056]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 08 09:45:30 compute-0 ceph-mon[73572]: pgmap v84: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:30 compute-0 ceph-mon[73572]: 2.1b deep-scrub starts
Oct 08 09:45:30 compute-0 ceph-mon[73572]: 2.1b deep-scrub ok
Oct 08 09:45:30 compute-0 ceph-mon[73572]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 08 09:45:30 compute-0 ceph-mon[73572]: Cluster is now healthy
Oct 08 09:45:30 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:30 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:45:30 compute-0 ceph-mon[73572]: osdmap e29: 3 total, 2 up, 3 in
Oct 08 09:45:30 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:30 compute-0 podman[86057]: 2025-10-08 09:45:30.468943878 +0000 UTC m=+0.107578564 container create 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:30 compute-0 podman[86057]: 2025-10-08 09:45:30.386684726 +0000 UTC m=+0.025319422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Oct 08 09:45:30 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Oct 08 09:45:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:30 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:30 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1a( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1b( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.18( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1e( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.19( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1f( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.c( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.d( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.6( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.7( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.4( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.3( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.2( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.5( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.e( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.f( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.9( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.8( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.b( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.a( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.15( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.14( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.17( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.16( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.11( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.10( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.13( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.12( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1d( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1c( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.18( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.6( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.4( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.0( empty local-lis/les=29/30 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.9( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.14( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.10( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.11( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.16( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.13( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:30 compute-0 systemd[1]: Started libpod-conmon-6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7.scope.
Oct 08 09:45:30 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786e00bb1f81d45bec0f865176f11a69c5c4a917bc95cc37be6bb32e46bb904f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786e00bb1f81d45bec0f865176f11a69c5c4a917bc95cc37be6bb32e46bb904f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786e00bb1f81d45bec0f865176f11a69c5c4a917bc95cc37be6bb32e46bb904f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:30 compute-0 podman[86057]: 2025-10-08 09:45:30.554212934 +0000 UTC m=+0.192847650 container init 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:30 compute-0 podman[86057]: 2025-10-08 09:45:30.562579611 +0000 UTC m=+0.201214307 container start 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:30 compute-0 podman[86057]: 2025-10-08 09:45:30.565831446 +0000 UTC m=+0.204466152 container attach 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 09:45:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 08 09:45:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2962484888' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 08 09:45:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2962484888' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 08 09:45:30 compute-0 bold_yonath[86072]: 
Oct 08 09:45:30 compute-0 bold_yonath[86072]: [global]
Oct 08 09:45:30 compute-0 bold_yonath[86072]:         fsid = 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:30 compute-0 bold_yonath[86072]:         mon_host = 192.168.122.100
Oct 08 09:45:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v87: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:30 compute-0 systemd[1]: libpod-6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7.scope: Deactivated successfully.
Oct 08 09:45:30 compute-0 podman[86057]: 2025-10-08 09:45:30.970415137 +0000 UTC m=+0.609049863 container died 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 09:45:30 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 08 09:45:30 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 08 09:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-786e00bb1f81d45bec0f865176f11a69c5c4a917bc95cc37be6bb32e46bb904f-merged.mount: Deactivated successfully.
Oct 08 09:45:31 compute-0 podman[86057]: 2025-10-08 09:45:31.054658691 +0000 UTC m=+0.693293367 container remove 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Oct 08 09:45:31 compute-0 systemd[1]: libpod-conmon-6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7.scope: Deactivated successfully.
Oct 08 09:45:31 compute-0 sudo[86054]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:31 compute-0 sudo[86135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svctcummyrdgzsxasqijukflyvaoaclq ; /usr/bin/python3'
Oct 08 09:45:31 compute-0 sudo[86135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:31 compute-0 sudo[86138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:45:31 compute-0 sudo[86138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:31 compute-0 sudo[86138]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Oct 08 09:45:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 08 09:45:31 compute-0 python3[86137]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:31 compute-0 podman[86163]: 2025-10-08 09:45:31.484058231 +0000 UTC m=+0.039845264 container create 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 09:45:31 compute-0 ceph-mon[73572]: 4.19 scrub starts
Oct 08 09:45:31 compute-0 ceph-mon[73572]: 4.19 scrub ok
Oct 08 09:45:31 compute-0 ceph-mon[73572]: 2.7 scrub starts
Oct 08 09:45:31 compute-0 ceph-mon[73572]: 2.7 scrub ok
Oct 08 09:45:31 compute-0 ceph-mon[73572]: osdmap e30: 3 total, 2 up, 3 in
Oct 08 09:45:31 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2962484888' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 08 09:45:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2962484888' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 08 09:45:31 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:31 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:31 compute-0 ceph-mon[73572]: from='osd.2 [v2:192.168.122.102:6800/2890316650,v1:192.168.122.102:6801/2890316650]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 08 09:45:31 compute-0 ceph-mon[73572]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 08 09:45:31 compute-0 systemd[1]: Started libpod-conmon-1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8.scope.
Oct 08 09:45:31 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314f9020b44ee769459974a96c34bfb6f89f3181d0dbf7cf44026f2db47ea1e9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314f9020b44ee769459974a96c34bfb6f89f3181d0dbf7cf44026f2db47ea1e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314f9020b44ee769459974a96c34bfb6f89f3181d0dbf7cf44026f2db47ea1e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:31 compute-0 podman[86163]: 2025-10-08 09:45:31.54405861 +0000 UTC m=+0.099845663 container init 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 08 09:45:31 compute-0 podman[86163]: 2025-10-08 09:45:31.549163701 +0000 UTC m=+0.104950734 container start 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:31 compute-0 podman[86163]: 2025-10-08 09:45:31.55227063 +0000 UTC m=+0.108057673 container attach 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:31 compute-0 podman[86163]: 2025-10-08 09:45:31.469210445 +0000 UTC m=+0.024997498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:31 compute-0 sudo[86181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:31 compute-0 sudo[86181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:31 compute-0 sudo[86181]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:31 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 11 completed events
Oct 08 09:45:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:45:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:31 compute-0 sudo[86207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:45:31 compute-0 sudo[86207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:32 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Oct 08 09:45:32 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2319921068' entity='client.admin' 
Oct 08 09:45:32 compute-0 pensive_wozniak[86178]: set ssl_option
Oct 08 09:45:32 compute-0 systemd[1]: libpod-1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8.scope: Deactivated successfully.
Oct 08 09:45:32 compute-0 podman[86163]: 2025-10-08 09:45:32.051253086 +0000 UTC m=+0.607040139 container died 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-314f9020b44ee769459974a96c34bfb6f89f3181d0dbf7cf44026f2db47ea1e9-merged.mount: Deactivated successfully.
Oct 08 09:45:32 compute-0 podman[86163]: 2025-10-08 09:45:32.098509366 +0000 UTC m=+0.654296409 container remove 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:45:32 compute-0 systemd[1]: libpod-conmon-1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8.scope: Deactivated successfully.
Oct 08 09:45:32 compute-0 sudo[86135]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:32 compute-0 sudo[86207]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:32 compute-0 sudo[86320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhuyodlkvegejcmrnkvzyudcowbhikkp ; /usr/bin/python3'
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e31 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Oct 08 09:45:32 compute-0 sudo[86320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:32 compute-0 python3[86322]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:32 compute-0 ceph-mon[73572]: pgmap v87: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:32 compute-0 ceph-mon[73572]: 5.18 scrub starts
Oct 08 09:45:32 compute-0 ceph-mon[73572]: 5.18 scrub ok
Oct 08 09:45:32 compute-0 ceph-mon[73572]: 2.9 scrub starts
Oct 08 09:45:32 compute-0 ceph-mon[73572]: 2.9 scrub ok
Oct 08 09:45:32 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2319921068' entity='client.admin' 
Oct 08 09:45:32 compute-0 ceph-mon[73572]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 08 09:45:32 compute-0 ceph-mon[73572]: osdmap e31: 3 total, 2 up, 3 in
Oct 08 09:45:32 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mon[73572]: from='osd.2 [v2:192.168.122.102:6800/2890316650,v1:192.168.122.102:6801/2890316650]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mon[73572]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:32 compute-0 podman[86323]: 2025-10-08 09:45:32.571215932 +0000 UTC m=+0.063339199 container create 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:32 compute-0 systemd[1]: Started libpod-conmon-3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2.scope.
Oct 08 09:45:32 compute-0 podman[86323]: 2025-10-08 09:45:32.5480194 +0000 UTC m=+0.040142677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:32 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b765adf329324f93b33a71d3c832e375d3c3f6be104e7473d0fd439b61792/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b765adf329324f93b33a71d3c832e375d3c3f6be104e7473d0fd439b61792/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b765adf329324f93b33a71d3c832e375d3c3f6be104e7473d0fd439b61792/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:32 compute-0 podman[86323]: 2025-10-08 09:45:32.672839807 +0000 UTC m=+0.164963134 container init 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 09:45:32 compute-0 podman[86323]: 2025-10-08 09:45:32.680827749 +0000 UTC m=+0.172951016 container start 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:32 compute-0 podman[86323]: 2025-10-08 09:45:32.686332347 +0000 UTC m=+0.178455664 container attach 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v89: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:45:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:33 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 08 09:45:33 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 08 09:45:33 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 08 09:45:33 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 08 09:45:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:33 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Oct 08 09:45:33 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Oct 08 09:45:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:33 compute-0 recursing_knuth[86338]: Scheduled rgw.rgw update...
Oct 08 09:45:33 compute-0 recursing_knuth[86338]: Scheduled ingress.rgw.default update...
Oct 08 09:45:33 compute-0 systemd[1]: libpod-3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2.scope: Deactivated successfully.
Oct 08 09:45:33 compute-0 podman[86323]: 2025-10-08 09:45:33.121682973 +0000 UTC m=+0.613806230 container died 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-470b765adf329324f93b33a71d3c832e375d3c3f6be104e7473d0fd439b61792-merged.mount: Deactivated successfully.
Oct 08 09:45:33 compute-0 podman[86323]: 2025-10-08 09:45:33.17342763 +0000 UTC m=+0.665550887 container remove 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:45:33 compute-0 systemd[1]: libpod-conmon-3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2.scope: Deactivated successfully.
Oct 08 09:45:33 compute-0 sudo[86320]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Oct 08 09:45:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.18( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.121037483s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512550354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.178676605s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570198059s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.18( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.120996475s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512550354s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.178634644s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570198059s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114104271s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.505683899s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114104271s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.505683899s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.18( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114066124s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.505737305s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.18( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114026070s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.505737305s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174949646s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.566734314s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174949646s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.566734314s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119994164s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511878967s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119994164s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511878967s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119948387s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511909485s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119895935s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511878967s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119919777s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511909485s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119863510s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511878967s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119695663s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511886597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119695663s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511962891s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119617462s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511886597s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119663239s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511962891s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119583130s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512023926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119583130s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512023926s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.177409172s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570030212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119441032s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512107849s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.177370071s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570030212s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.177398682s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570114136s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119441032s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512107849s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.177398682s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570114136s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1c( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119413376s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512397766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119321823s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512313843s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119057655s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512062073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1c( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119380951s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512397766s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119321823s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512313843s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119057655s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512062073s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119115829s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512184143s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119115829s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512184143s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119060516s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512351990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119127274s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512466431s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119025230s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512351990s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119091988s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512466431s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119064331s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512535095s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119064331s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512535095s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119032860s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512588501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119032860s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512588501s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118725777s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512420654s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118890762s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512626648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118725777s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512420654s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.176372528s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570159912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118890762s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512626648s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.176337242s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570159912s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.176275253s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570251465s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.2( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118567467s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512573242s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.176275253s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570251465s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119441986s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513511658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.2( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118532181s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512573242s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119409561s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513511658s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118524551s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512710571s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118454933s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512710571s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118489265s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512802124s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118489265s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512802124s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175937653s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570274353s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118301392s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512779236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.7( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118440628s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512924194s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118301392s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512779236s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.7( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118406296s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512924194s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118491173s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513366699s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118454933s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513374329s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118491173s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513366699s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175724030s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570274353s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118454933s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513374329s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118487358s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513656616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175113678s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570327759s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118487358s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513656616s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175046921s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570327759s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175057411s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570373535s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118297577s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513610840s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118257523s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513610840s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174987793s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570373535s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118144989s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513679504s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174762726s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570335388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118144989s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513679504s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117730141s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513336182s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174729347s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570335388s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117713928s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513336182s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117918015s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513687134s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118183136s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514030457s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117879868s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513687134s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118150711s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514030457s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117716789s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513748169s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117716789s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513748169s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174188614s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570335388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174156189s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570335388s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117549896s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513961792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117510796s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513961792s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117620468s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514175415s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117230415s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513832092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117584229s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514175415s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117197037s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513832092s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117197037s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513862610s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117210388s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513908386s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117210388s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513908386s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.173627853s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570388794s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.173590660s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570388794s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117014885s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513893127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117014885s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513893127s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117106438s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513862610s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116837502s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513923645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116839409s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513961792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116837502s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513923645s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116839409s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513961792s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116585732s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513954163s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116562843s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513999939s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.9( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116542816s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513999939s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116525650s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513999939s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.9( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116506577s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513999939s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116585732s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513954163s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116289139s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514091492s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.16( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116175652s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514053345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116255760s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514091492s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.16( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116141319s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514053345s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.172485352s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570465088s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.172746658s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570419312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116067886s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514076233s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.172447205s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570465088s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.171899796s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570419312s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116067886s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514076233s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115336418s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514129639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115336418s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514129639s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.171558380s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570465088s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115266800s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514190674s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.15( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115489006s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514427185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115266800s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514190674s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.15( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115456581s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514427185s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.115301132s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514312744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.171558380s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570465088s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115226746s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514312744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.115263939s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514312744s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115194321s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514312744s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.115078926s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514358521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.115044594s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514358521s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.122462273s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.521919250s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.122462273s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.521919250s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114801407s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514404297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114801407s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514404297s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.114745140s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514450073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.114709854s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514450073s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.10( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114684105s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514457703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.10( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114649773s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514457703s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.11( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118941307s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.518859863s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.170584679s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570556641s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.11( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118906975s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.518859863s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.170584679s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570556641s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118884087s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.518920898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118884087s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.518920898s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.170555115s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570907593s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118961334s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.519348145s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.170555115s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570907593s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118926048s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.519348145s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.113756180s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514350891s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.113756180s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514350891s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:33 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct 08 09:45:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.19( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.13( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.10( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.b( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.8( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.e( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.9( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.6( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.e( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.1( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.4( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.4( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.6( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.3( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.2( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.1e( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.f( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.9( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.1b( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.1e( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.1f( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.18( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:33 compute-0 ceph-mon[73572]: 3.1f scrub starts
Oct 08 09:45:33 compute-0 ceph-mon[73572]: 3.1f scrub ok
Oct 08 09:45:33 compute-0 ceph-mon[73572]: 2.6 scrub starts
Oct 08 09:45:33 compute-0 ceph-mon[73572]: 2.6 scrub ok
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:33 compute-0 ceph-mon[73572]: pgmap v89: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mon[73572]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:33 compute-0 ceph-mon[73572]: Saving service ingress.rgw.default spec with placement count:2
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:45:33 compute-0 ceph-mon[73572]: osdmap e32: 3 total, 2 up, 3 in
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:33 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:33 compute-0 python3[86450]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:45:33 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 08 09:45:34 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 08 09:45:34 compute-0 python3[86521]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916733.399294-33731-3548316112324/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Oct 08 09:45:34 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.1e( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.1f( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.1e( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.9( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.6( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.4( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.1( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.b( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.e( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.10( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.19( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:34 compute-0 ceph-mon[73572]: purged_snaps scrub starts
Oct 08 09:45:34 compute-0 ceph-mon[73572]: purged_snaps scrub ok
Oct 08 09:45:34 compute-0 ceph-mon[73572]: 3.1e scrub starts
Oct 08 09:45:34 compute-0 ceph-mon[73572]: 3.1e scrub ok
Oct 08 09:45:34 compute-0 ceph-mon[73572]: 2.4 scrub starts
Oct 08 09:45:34 compute-0 ceph-mon[73572]: 2.4 scrub ok
Oct 08 09:45:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:34 compute-0 ceph-mon[73572]: osdmap e33: 3 total, 2 up, 3 in
Oct 08 09:45:34 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:34 compute-0 sudo[86569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wljzrhyytnjnllpdfmmkmtxhwjsmyzbp ; /usr/bin/python3'
Oct 08 09:45:34 compute-0 sudo[86569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 08 09:45:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.8M
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.8M
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:45:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:45:34 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:45:34 compute-0 python3[86571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:34 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct 08 09:45:35 compute-0 sudo[86572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 08 09:45:35 compute-0 sudo[86572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct 08 09:45:35 compute-0 sudo[86572]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 podman[86595]: 2025-10-08 09:45:35.052655372 +0000 UTC m=+0.044312158 container create f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 09:45:35 compute-0 sudo[86603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph
Oct 08 09:45:35 compute-0 sudo[86603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86603]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 systemd[1]: Started libpod-conmon-f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a.scope.
Oct 08 09:45:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e68b1dc7aa381cccbc7483b1bcbf96fefbeee83db759e580c5ac1eb212adde/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e68b1dc7aa381cccbc7483b1bcbf96fefbeee83db759e580c5ac1eb212adde/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e68b1dc7aa381cccbc7483b1bcbf96fefbeee83db759e580c5ac1eb212adde/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:35 compute-0 sudo[86635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:45:35 compute-0 sudo[86635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86635]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 podman[86595]: 2025-10-08 09:45:35.123725581 +0000 UTC m=+0.115382397 container init f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 09:45:35 compute-0 podman[86595]: 2025-10-08 09:45:35.032642223 +0000 UTC m=+0.024299029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:35 compute-0 podman[86595]: 2025-10-08 09:45:35.129310552 +0000 UTC m=+0.120967338 container start f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:35 compute-0 podman[86595]: 2025-10-08 09:45:35.134598041 +0000 UTC m=+0.126254827 container attach f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:35 compute-0 sudo[86666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:35 compute-0 sudo[86666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86666]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 sudo[86691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:45:35 compute-0 sudo[86691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86691]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct 08 09:45:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:35 compute-0 sudo[86758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:45:35 compute-0 sudo[86758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86758]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 sudo[86783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:45:35 compute-0 sudo[86783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86783]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service node-exporter spec with placement *
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Oct 08 09:45:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:35 compute-0 sudo[86808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 08 09:45:35 compute-0 sudo[86808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86808]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Oct 08 09:45:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Oct 08 09:45:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct 08 09:45:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Oct 08 09:45:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct 08 09:45:35 compute-0 sudo[86834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:45:35 compute-0 sudo[86834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86834]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 laughing_pasteur[86660]: Scheduled node-exporter update...
Oct 08 09:45:35 compute-0 laughing_pasteur[86660]: Scheduled grafana update...
Oct 08 09:45:35 compute-0 laughing_pasteur[86660]: Scheduled prometheus update...
Oct 08 09:45:35 compute-0 laughing_pasteur[86660]: Scheduled alertmanager update...
Oct 08 09:45:35 compute-0 systemd[1]: libpod-f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a.scope: Deactivated successfully.
Oct 08 09:45:35 compute-0 podman[86595]: 2025-10-08 09:45:35.626439802 +0000 UTC m=+0.618096608 container died f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 09:45:35 compute-0 sudo[86859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:45:35 compute-0 sudo[86859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86859]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6e68b1dc7aa381cccbc7483b1bcbf96fefbeee83db759e580c5ac1eb212adde-merged.mount: Deactivated successfully.
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:35 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:35 compute-0 podman[86595]: 2025-10-08 09:45:35.676642284 +0000 UTC m=+0.668299070 container remove f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Oct 08 09:45:35 compute-0 systemd[1]: libpod-conmon-f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a.scope: Deactivated successfully.
Oct 08 09:45:35 compute-0 sudo[86569]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 sudo[86896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:45:35 compute-0 sudo[86896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86896]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 sudo[86922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:35 compute-0 sudo[86922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86922]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 sudo[86947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:45:35 compute-0 sudo[86947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86947]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 sudo[86995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:45:35 compute-0 sudo[86995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:35 compute-0 sudo[86995]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:35 compute-0 ceph-mon[73572]: 5.19 scrub starts
Oct 08 09:45:35 compute-0 ceph-mon[73572]: 5.19 scrub ok
Oct 08 09:45:35 compute-0 ceph-mon[73572]: 2.1a scrub starts
Oct 08 09:45:35 compute-0 ceph-mon[73572]: 2.1a scrub ok
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:45:35 compute-0 ceph-mon[73572]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct 08 09:45:35 compute-0 ceph-mon[73572]: Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:45:35 compute-0 ceph-mon[73572]: pgmap v92: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:35 compute-0 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:45:35 compute-0 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:45:35 compute-0 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:35 compute-0 sudo[87020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:45:35 compute-0 sudo[87020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:36 compute-0 sudo[87020]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:36 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 08 09:45:36 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 08 09:45:36 compute-0 sudo[87045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:36 compute-0 sudo[87045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:36 compute-0 sudo[87045]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:45:36 compute-0 sudo[87093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyjdcugdohoaiopuhtqxucvjvuatwxrr ; /usr/bin/python3'
Oct 08 09:45:36 compute-0 sudo[87093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 python3[87095]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:36 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:36 compute-0 podman[87096]: 2025-10-08 09:45:36.358366219 +0000 UTC m=+0.099547190 container create 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:36 compute-0 podman[87096]: 2025-10-08 09:45:36.278311319 +0000 UTC m=+0.019492310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:45:36 compute-0 systemd[1]: Started libpod-conmon-8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1.scope.
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:36 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e13f0d0c7a0ecc60e3eadbee5a8a9b92c571bfba4bd6b57e81f73340cef0c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e13f0d0c7a0ecc60e3eadbee5a8a9b92c571bfba4bd6b57e81f73340cef0c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e13f0d0c7a0ecc60e3eadbee5a8a9b92c571bfba4bd6b57e81f73340cef0c3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:36 compute-0 podman[87096]: 2025-10-08 09:45:36.44036388 +0000 UTC m=+0.181544871 container init 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:45:36 compute-0 podman[87096]: 2025-10-08 09:45:36.445102826 +0000 UTC m=+0.186283817 container start 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 09:45:36 compute-0 podman[87096]: 2025-10-08 09:45:36.450780542 +0000 UTC m=+0.191961523 container attach 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:36 compute-0 sudo[87114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:36 compute-0 sudo[87114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:36 compute-0 sudo[87114]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:36 compute-0 sudo[87140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:45:36 compute-0 sudo[87140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:36 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event f1f7cd03-9f1a-4216-9173-a4ef5b56243c (Global Recovery Event) in 10 seconds
Oct 08 09:45:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Oct 08 09:45:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3966809353' entity='client.admin' 
Oct 08 09:45:36 compute-0 systemd[1]: libpod-8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1.scope: Deactivated successfully.
Oct 08 09:45:36 compute-0 podman[87096]: 2025-10-08 09:45:36.831657599 +0000 UTC m=+0.572838620 container died 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-91e13f0d0c7a0ecc60e3eadbee5a8a9b92c571bfba4bd6b57e81f73340cef0c3-merged.mount: Deactivated successfully.
Oct 08 09:45:36 compute-0 podman[87096]: 2025-10-08 09:45:36.910125314 +0000 UTC m=+0.651306285 container remove 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:36 compute-0 sudo[87093]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:36 compute-0 systemd[1]: libpod-conmon-8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1.scope: Deactivated successfully.
Oct 08 09:45:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:36 compute-0 ceph-mon[73572]: 6.18 scrub starts
Oct 08 09:45:36 compute-0 ceph-mon[73572]: 6.18 scrub ok
Oct 08 09:45:36 compute-0 ceph-mon[73572]: 7.1c scrub starts
Oct 08 09:45:36 compute-0 ceph-mon[73572]: 7.1c scrub ok
Oct 08 09:45:36 compute-0 ceph-mon[73572]: Saving service node-exporter spec with placement *
Oct 08 09:45:36 compute-0 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:36 compute-0 ceph-mon[73572]: Saving service grafana spec with placement compute-0;count:1
Oct 08 09:45:36 compute-0 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:36 compute-0 ceph-mon[73572]: Saving service prometheus spec with placement compute-0;count:1
Oct 08 09:45:36 compute-0 ceph-mon[73572]: Saving service alertmanager spec with placement compute-0;count:1
Oct 08 09:45:36 compute-0 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:36 compute-0 ceph-mon[73572]: 5.1d scrub starts
Oct 08 09:45:36 compute-0 ceph-mon[73572]: 5.1d scrub ok
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3966809353' entity='client.admin' 
Oct 08 09:45:36 compute-0 podman[87236]: 2025-10-08 09:45:36.990679044 +0000 UTC m=+0.048827445 container create 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:37 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Oct 08 09:45:37 compute-0 systemd[1]: Started libpod-conmon-16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64.scope.
Oct 08 09:45:37 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Oct 08 09:45:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:37 compute-0 podman[87236]: 2025-10-08 09:45:37.054176829 +0000 UTC m=+0.112325230 container init 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:37 compute-0 podman[87236]: 2025-10-08 09:45:37.061900849 +0000 UTC m=+0.120049260 container start 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 09:45:37 compute-0 silly_ishizaka[87252]: 167 167
Oct 08 09:45:37 compute-0 systemd[1]: libpod-16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64.scope: Deactivated successfully.
Oct 08 09:45:37 compute-0 podman[87236]: 2025-10-08 09:45:37.067303633 +0000 UTC m=+0.125452074 container attach 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 08 09:45:37 compute-0 podman[87236]: 2025-10-08 09:45:37.067581985 +0000 UTC m=+0.125730396 container died 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:37 compute-0 podman[87236]: 2025-10-08 09:45:36.97322528 +0000 UTC m=+0.031373701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b7c36b75f158b1a4faa85ac697b6014b9c43ca447a6975eb6a80c12e5df69c8-merged.mount: Deactivated successfully.
Oct 08 09:45:37 compute-0 podman[87236]: 2025-10-08 09:45:37.117685773 +0000 UTC m=+0.175834184 container remove 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:37 compute-0 sudo[87292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwrulmbvdcrjsjgcmflnfuwthzfxtfge ; /usr/bin/python3'
Oct 08 09:45:37 compute-0 systemd[1]: libpod-conmon-16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64.scope: Deactivated successfully.
Oct 08 09:45:37 compute-0 sudo[87292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:37 compute-0 podman[87303]: 2025-10-08 09:45:37.288664564 +0000 UTC m=+0.045899005 container create 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:37 compute-0 python3[87297]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:37 compute-0 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct 08 09:45:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:37 compute-0 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 08 09:45:37 compute-0 systemd[1]: Started libpod-conmon-711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5.scope.
Oct 08 09:45:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:37 compute-0 podman[87317]: 2025-10-08 09:45:37.360595637 +0000 UTC m=+0.053269820 container create 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:37 compute-0 podman[87303]: 2025-10-08 09:45:37.265210271 +0000 UTC m=+0.022444722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:37 compute-0 podman[87303]: 2025-10-08 09:45:37.376706825 +0000 UTC m=+0.133941266 container init 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:37 compute-0 podman[87303]: 2025-10-08 09:45:37.383261737 +0000 UTC m=+0.140496168 container start 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 09:45:37 compute-0 podman[87303]: 2025-10-08 09:45:37.387365108 +0000 UTC m=+0.144599539 container attach 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 09:45:37 compute-0 systemd[1]: Started libpod-conmon-461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4.scope.
Oct 08 09:45:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 08 09:45:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555a56cfe474a6ad3cf6eb32850f876051874dc60dd80aa552a7176beb41ec95/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555a56cfe474a6ad3cf6eb32850f876051874dc60dd80aa552a7176beb41ec95/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555a56cfe474a6ad3cf6eb32850f876051874dc60dd80aa552a7176beb41ec95/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 08 09:45:37 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/2890316650,v1:192.168.122.102:6801/2890316650] boot
Oct 08 09:45:37 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 08 09:45:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:37 compute-0 podman[87317]: 2025-10-08 09:45:37.429164282 +0000 UTC m=+0.121838495 container init 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:45:37 compute-0 podman[87317]: 2025-10-08 09:45:37.338105664 +0000 UTC m=+0.030779867 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:37 compute-0 podman[87317]: 2025-10-08 09:45:37.435098517 +0000 UTC m=+0.127772700 container start 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:37 compute-0 podman[87317]: 2025-10-08 09:45:37.440323975 +0000 UTC m=+0.132998158 container attach 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 08 09:45:37 compute-0 elated_easley[87333]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:45:37 compute-0 elated_easley[87333]: --> All data devices are unavailable
Oct 08 09:45:37 compute-0 systemd[1]: libpod-711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5.scope: Deactivated successfully.
Oct 08 09:45:37 compute-0 podman[87303]: 2025-10-08 09:45:37.719231522 +0000 UTC m=+0.476465973 container died 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Oct 08 09:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44-merged.mount: Deactivated successfully.
Oct 08 09:45:37 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1514290386' entity='client.admin' 
Oct 08 09:45:37 compute-0 podman[87303]: 2025-10-08 09:45:37.788779877 +0000 UTC m=+0.546014308 container remove 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:37 compute-0 systemd[1]: libpod-conmon-711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5.scope: Deactivated successfully.
Oct 08 09:45:37 compute-0 systemd[1]: libpod-461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4.scope: Deactivated successfully.
Oct 08 09:45:37 compute-0 podman[87317]: 2025-10-08 09:45:37.804335902 +0000 UTC m=+0.497010115 container died 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-555a56cfe474a6ad3cf6eb32850f876051874dc60dd80aa552a7176beb41ec95-merged.mount: Deactivated successfully.
Oct 08 09:45:37 compute-0 sudo[87140]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.585172653s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.505683899s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.585124493s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.505683899s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591294765s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511878967s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.646142960s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.566734314s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591255665s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511878967s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.646098137s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.566734314s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591252327s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512023926s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591234684s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512023926s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591255665s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512107849s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591235161s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512107849s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590954781s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512062073s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590933800s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512062073s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.648948669s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570114136s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.648931503s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570114136s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590936184s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512184143s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591032028s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512313843s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590922356s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512184143s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591004372s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512313843s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591074467s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512420654s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591054916s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512420654s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591136456s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512588501s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591069221s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512535095s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591107368s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512588501s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591048717s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512535095s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.648645401s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570251465s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.648628235s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570251465s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591042519s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512802124s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591028214s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512802124s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590711117s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512779236s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590661526s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512779236s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591164112s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513366699s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591093540s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513374329s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591070652s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513366699s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591074944s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513374329s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591279507s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513656616s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591251850s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513656616s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591155529s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513679504s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591136932s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513679504s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591093063s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513748169s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590025902s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512626648s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589921951s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512626648s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590962410s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513908386s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590939522s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513908386s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590873241s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513893127s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590769291s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513893127s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590764523s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513961792s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590742588s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513961792s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590628147s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513954163s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.590536594s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514076233s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590607643s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513954163s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590866566s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513748169s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.590515614s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514076233s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.646706581s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570465088s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.590148926s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513923645s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.646681786s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570465088s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.590128422s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513923645s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590230942s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514350891s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590206623s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514350891s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589753628s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514129639s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589865685s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514190674s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589458466s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514404297s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589147091s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514129639s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589407444s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514404297s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.596970558s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.521919250s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.596794605s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.521919250s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589154243s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514190674s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.592108727s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.518920898s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.592087269s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.518920898s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.644055367s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570907593s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.644032478s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570907593s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.643602371s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570556641s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:45:37 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.643568039s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570556641s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:45:37 compute-0 podman[87317]: 2025-10-08 09:45:37.862700433 +0000 UTC m=+0.555374616 container remove 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:37 compute-0 systemd[1]: libpod-conmon-461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4.scope: Deactivated successfully.
Oct 08 09:45:37 compute-0 sudo[87292]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:37 compute-0 sudo[87403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:37 compute-0 sudo[87403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:37 compute-0 sudo[87403]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:37 compute-0 sudo[87428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:45:37 compute-0 sudo[87428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:37 compute-0 ceph-mon[73572]: 2.17 scrub starts
Oct 08 09:45:37 compute-0 ceph-mon[73572]: 2.17 scrub ok
Oct 08 09:45:37 compute-0 ceph-mon[73572]: OSD bench result of 8207.449951 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 08 09:45:37 compute-0 ceph-mon[73572]: pgmap v93: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 08 09:45:37 compute-0 ceph-mon[73572]: 6.1f scrub starts
Oct 08 09:45:37 compute-0 ceph-mon[73572]: 6.1f scrub ok
Oct 08 09:45:37 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:37 compute-0 ceph-mon[73572]: osd.2 [v2:192.168.122.102:6800/2890316650,v1:192.168.122.102:6801/2890316650] boot
Oct 08 09:45:37 compute-0 ceph-mon[73572]: osdmap e34: 3 total, 3 up, 3 in
Oct 08 09:45:37 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1514290386' entity='client.admin' 
Oct 08 09:45:38 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Oct 08 09:45:38 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Oct 08 09:45:38 compute-0 sudo[87476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raiiixewnkhplkizbtywyfzrvsigftea ; /usr/bin/python3'
Oct 08 09:45:38 compute-0 sudo[87476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:38 compute-0 python3[87478]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:38 compute-0 podman[87503]: 2025-10-08 09:45:38.265424506 +0000 UTC m=+0.053215048 container create 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:45:38 compute-0 systemd[1]: Started libpod-conmon-0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db.scope.
Oct 08 09:45:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918541905b2588f9fec28fd1fa033415f6f06258d72f80daf7d8c8dbf7c5bad3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918541905b2588f9fec28fd1fa033415f6f06258d72f80daf7d8c8dbf7c5bad3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918541905b2588f9fec28fd1fa033415f6f06258d72f80daf7d8c8dbf7c5bad3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:38 compute-0 podman[87503]: 2025-10-08 09:45:38.235668863 +0000 UTC m=+0.023459425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:38 compute-0 podman[87503]: 2025-10-08 09:45:38.338679175 +0000 UTC m=+0.126469727 container init 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:38 compute-0 podman[87503]: 2025-10-08 09:45:38.344696364 +0000 UTC m=+0.132486906 container start 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 08 09:45:38 compute-0 podman[87503]: 2025-10-08 09:45:38.348821845 +0000 UTC m=+0.136612377 container attach 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 09:45:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 08 09:45:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 08 09:45:38 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 08 09:45:38 compute-0 podman[87538]: 2025-10-08 09:45:38.443743912 +0000 UTC m=+0.035368618 container create cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:38 compute-0 systemd[1]: Started libpod-conmon-cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371.scope.
Oct 08 09:45:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:38 compute-0 podman[87538]: 2025-10-08 09:45:38.494847112 +0000 UTC m=+0.086471848 container init cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:45:38 compute-0 podman[87538]: 2025-10-08 09:45:38.500324089 +0000 UTC m=+0.091948795 container start cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:38 compute-0 festive_chebyshev[87573]: 167 167
Oct 08 09:45:38 compute-0 systemd[1]: libpod-cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371.scope: Deactivated successfully.
Oct 08 09:45:38 compute-0 conmon[87573]: conmon cfce3773f0c7bf23b5be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371.scope/container/memory.events
Oct 08 09:45:38 compute-0 podman[87538]: 2025-10-08 09:45:38.508885734 +0000 UTC m=+0.100510470 container attach cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:38 compute-0 podman[87538]: 2025-10-08 09:45:38.509399286 +0000 UTC m=+0.101024012 container died cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 09:45:38 compute-0 podman[87538]: 2025-10-08 09:45:38.429286182 +0000 UTC m=+0.020910908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5453c8781f517bf011d9a881940494728f830248255f1034bd31f67b1f4e325c-merged.mount: Deactivated successfully.
Oct 08 09:45:38 compute-0 podman[87538]: 2025-10-08 09:45:38.569688692 +0000 UTC m=+0.161313408 container remove cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 09:45:38 compute-0 systemd[1]: libpod-conmon-cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371.scope: Deactivated successfully.
Oct 08 09:45:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Oct 08 09:45:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2213379190' entity='client.admin' 
Oct 08 09:45:38 compute-0 podman[87595]: 2025-10-08 09:45:38.762629313 +0000 UTC m=+0.077296019 container create fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:38 compute-0 systemd[1]: libpod-0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db.scope: Deactivated successfully.
Oct 08 09:45:38 compute-0 podman[87503]: 2025-10-08 09:45:38.774673458 +0000 UTC m=+0.562463990 container died 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:38 compute-0 systemd[1]: Started libpod-conmon-fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a.scope.
Oct 08 09:45:38 compute-0 podman[87595]: 2025-10-08 09:45:38.704530128 +0000 UTC m=+0.019196854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:38 compute-0 podman[87595]: 2025-10-08 09:45:38.881005403 +0000 UTC m=+0.195672119 container init fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:38 compute-0 podman[87595]: 2025-10-08 09:45:38.888895095 +0000 UTC m=+0.203561801 container start fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 09:45:38 compute-0 podman[87595]: 2025-10-08 09:45:38.936079192 +0000 UTC m=+0.250745908 container attach fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-918541905b2588f9fec28fd1fa033415f6f06258d72f80daf7d8c8dbf7c5bad3-merged.mount: Deactivated successfully.
Oct 08 09:45:38 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 08 09:45:39 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 08 09:45:39 compute-0 ceph-mon[73572]: 7.12 scrub starts
Oct 08 09:45:39 compute-0 ceph-mon[73572]: 7.12 scrub ok
Oct 08 09:45:39 compute-0 ceph-mon[73572]: 6.c deep-scrub starts
Oct 08 09:45:39 compute-0 ceph-mon[73572]: 6.c deep-scrub ok
Oct 08 09:45:39 compute-0 ceph-mon[73572]: osdmap e35: 3 total, 3 up, 3 in
Oct 08 09:45:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2213379190' entity='client.admin' 
Oct 08 09:45:39 compute-0 podman[87503]: 2025-10-08 09:45:39.113663243 +0000 UTC m=+0.901453805 container remove 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:39 compute-0 systemd[1]: libpod-conmon-0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db.scope: Deactivated successfully.
Oct 08 09:45:39 compute-0 sudo[87476]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:39 compute-0 frosty_napier[87621]: {
Oct 08 09:45:39 compute-0 frosty_napier[87621]:     "1": [
Oct 08 09:45:39 compute-0 frosty_napier[87621]:         {
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "devices": [
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "/dev/loop3"
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             ],
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "lv_name": "ceph_lv0",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "lv_size": "21470642176",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "name": "ceph_lv0",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "tags": {
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.cluster_name": "ceph",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.crush_device_class": "",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.encrypted": "0",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.osd_id": "1",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.type": "block",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.vdo": "0",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:                 "ceph.with_tpm": "0"
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             },
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "type": "block",
Oct 08 09:45:39 compute-0 frosty_napier[87621]:             "vg_name": "ceph_vg0"
Oct 08 09:45:39 compute-0 frosty_napier[87621]:         }
Oct 08 09:45:39 compute-0 frosty_napier[87621]:     ]
Oct 08 09:45:39 compute-0 frosty_napier[87621]: }
Oct 08 09:45:39 compute-0 systemd[1]: libpod-fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a.scope: Deactivated successfully.
Oct 08 09:45:39 compute-0 podman[87595]: 2025-10-08 09:45:39.258390885 +0000 UTC m=+0.573057611 container died fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281-merged.mount: Deactivated successfully.
Oct 08 09:45:39 compute-0 podman[87595]: 2025-10-08 09:45:39.342349816 +0000 UTC m=+0.657016522 container remove fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 09:45:39 compute-0 systemd[1]: libpod-conmon-fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a.scope: Deactivated successfully.
Oct 08 09:45:39 compute-0 sudo[87428]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:39 compute-0 sudo[87648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:39 compute-0 sudo[87648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:39 compute-0 sudo[87648]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:39 compute-0 sudo[87673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:45:39 compute-0 sudo[87673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:39 compute-0 sudo[87721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhdhspmqdlxknwbyvxkvfsgmvzceqfih ; /usr/bin/python3'
Oct 08 09:45:39 compute-0 sudo[87721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:39 compute-0 python3[87723]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:39 compute-0 sudo[87721]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:39 compute-0 podman[87776]: 2025-10-08 09:45:39.901138611 +0000 UTC m=+0.054987967 container create 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 08 09:45:39 compute-0 systemd[1]: Started libpod-conmon-25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751.scope.
Oct 08 09:45:39 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:39 compute-0 podman[87776]: 2025-10-08 09:45:39.866393782 +0000 UTC m=+0.020243168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:39 compute-0 podman[87776]: 2025-10-08 09:45:39.997058534 +0000 UTC m=+0.150907930 container init 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 09:45:40 compute-0 podman[87776]: 2025-10-08 09:45:40.003884742 +0000 UTC m=+0.157734108 container start 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:40 compute-0 admiring_keldysh[87792]: 167 167
Oct 08 09:45:40 compute-0 systemd[1]: libpod-25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751.scope: Deactivated successfully.
Oct 08 09:45:40 compute-0 podman[87776]: 2025-10-08 09:45:40.026811055 +0000 UTC m=+0.180660431 container attach 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:40 compute-0 podman[87776]: 2025-10-08 09:45:40.027824507 +0000 UTC m=+0.181673863 container died 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 09:45:40 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.4 deep-scrub starts
Oct 08 09:45:40 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
Oct 08 09:45:40 compute-0 ceph-mon[73572]: 2.14 scrub starts
Oct 08 09:45:40 compute-0 ceph-mon[73572]: 2.14 scrub ok
Oct 08 09:45:40 compute-0 ceph-mon[73572]: pgmap v96: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:40 compute-0 ceph-mon[73572]: 4.f scrub starts
Oct 08 09:45:40 compute-0 ceph-mon[73572]: 4.f scrub ok
Oct 08 09:45:40 compute-0 ceph-mon[73572]: 2.1c scrub starts
Oct 08 09:45:40 compute-0 ceph-mon[73572]: 2.1c scrub ok
Oct 08 09:45:40 compute-0 sudo[87833]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzwyipiwpesalqlsaypkfejefaawfpzm ; /usr/bin/python3'
Oct 08 09:45:40 compute-0 sudo[87833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f93151b2c2b7c8818db2ec9a0e6a5f7a2c11eca44f7b99ddea79680b2bc6436-merged.mount: Deactivated successfully.
Oct 08 09:45:40 compute-0 podman[87776]: 2025-10-08 09:45:40.132610373 +0000 UTC m=+0.286459719 container remove 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:40 compute-0 systemd[1]: libpod-conmon-25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751.scope: Deactivated successfully.
Oct 08 09:45:40 compute-0 python3[87835]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.ixicfj/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:40 compute-0 podman[87843]: 2025-10-08 09:45:40.290897578 +0000 UTC m=+0.036940610 container create 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:40 compute-0 podman[87844]: 2025-10-08 09:45:40.362783444 +0000 UTC m=+0.108996201 container create d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 09:45:40 compute-0 podman[87843]: 2025-10-08 09:45:40.273814782 +0000 UTC m=+0.019857844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:40 compute-0 podman[87844]: 2025-10-08 09:45:40.277590473 +0000 UTC m=+0.023803330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:40 compute-0 systemd[1]: Started libpod-conmon-36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579.scope.
Oct 08 09:45:40 compute-0 systemd[1]: Started libpod-conmon-d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e.scope.
Oct 08 09:45:40 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:40 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4212c556d0d60c9f05cf637f2b3051141f9b2ece24676ab2bbfab8f090342/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4212c556d0d60c9f05cf637f2b3051141f9b2ece24676ab2bbfab8f090342/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4212c556d0d60c9f05cf637f2b3051141f9b2ece24676ab2bbfab8f090342/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:40 compute-0 podman[87844]: 2025-10-08 09:45:40.537513703 +0000 UTC m=+0.283726530 container init d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 09:45:40 compute-0 podman[87844]: 2025-10-08 09:45:40.549180376 +0000 UTC m=+0.295393133 container start d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 08 09:45:40 compute-0 podman[87843]: 2025-10-08 09:45:40.592361385 +0000 UTC m=+0.338404507 container init 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 08 09:45:40 compute-0 podman[87843]: 2025-10-08 09:45:40.602250571 +0000 UTC m=+0.348293653 container start 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:40 compute-0 podman[87843]: 2025-10-08 09:45:40.640208813 +0000 UTC m=+0.386251955 container attach 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:40 compute-0 podman[87844]: 2025-10-08 09:45:40.689942311 +0000 UTC m=+0.436155148 container attach d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct 08 09:45:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.ixicfj/server_addr}] v 0)
Oct 08 09:45:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/8501056' entity='client.admin' 
Oct 08 09:45:41 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 08 09:45:41 compute-0 systemd[1]: libpod-d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e.scope: Deactivated successfully.
Oct 08 09:45:41 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 08 09:45:41 compute-0 podman[87945]: 2025-10-08 09:45:41.065904398 +0000 UTC m=+0.025379102 container died d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 09:45:41 compute-0 lvm[87988]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:45:41 compute-0 lvm[87988]: VG ceph_vg0 finished
Oct 08 09:45:41 compute-0 ceph-mon[73572]: 2.11 scrub starts
Oct 08 09:45:41 compute-0 ceph-mon[73572]: 2.11 scrub ok
Oct 08 09:45:41 compute-0 ceph-mon[73572]: 3.4 deep-scrub starts
Oct 08 09:45:41 compute-0 ceph-mon[73572]: 3.4 deep-scrub ok
Oct 08 09:45:41 compute-0 ceph-mon[73572]: 2.16 scrub starts
Oct 08 09:45:41 compute-0 ceph-mon[73572]: 2.16 scrub ok
Oct 08 09:45:41 compute-0 ceph-mon[73572]: 2.5 scrub starts
Oct 08 09:45:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/8501056' entity='client.admin' 
Oct 08 09:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fb4212c556d0d60c9f05cf637f2b3051141f9b2ece24676ab2bbfab8f090342-merged.mount: Deactivated successfully.
Oct 08 09:45:41 compute-0 podman[87945]: 2025-10-08 09:45:41.352771799 +0000 UTC m=+0.312246503 container remove d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 09:45:41 compute-0 lucid_elbakyan[87875]: {}
Oct 08 09:45:41 compute-0 systemd[1]: libpod-conmon-d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e.scope: Deactivated successfully.
Oct 08 09:45:41 compute-0 systemd[1]: libpod-36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579.scope: Deactivated successfully.
Oct 08 09:45:41 compute-0 systemd[1]: libpod-36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579.scope: Consumed 1.063s CPU time.
Oct 08 09:45:41 compute-0 sudo[87833]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:41 compute-0 podman[87990]: 2025-10-08 09:45:41.41607596 +0000 UTC m=+0.023419598 container died 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec-merged.mount: Deactivated successfully.
Oct 08 09:45:41 compute-0 podman[87990]: 2025-10-08 09:45:41.455990195 +0000 UTC m=+0.063333833 container remove 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Oct 08 09:45:41 compute-0 systemd[1]: libpod-conmon-36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579.scope: Deactivated successfully.
Oct 08 09:45:41 compute-0 sudo[87673]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:45:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:45:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:41 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 84ae7ebc-c8b9-4226-9ef4-d352c70615bc (Updating rgw.rgw deployment (+3 -> 3))
Oct 08 09:45:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 08 09:45:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:45:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:45:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct 08 09:45:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:41 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:41 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.pgshil on compute-2
Oct 08 09:45:41 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.pgshil on compute-2
Oct 08 09:45:41 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 12 completed events
Oct 08 09:45:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:45:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:41 compute-0 ceph-mgr[73869]: [progress WARNING root] Starting Global Recovery Event,57 pgs not in active + clean state
Oct 08 09:45:42 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 08 09:45:42 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 08 09:45:42 compute-0 sudo[88027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viwxryccstbzpqhxcrichnwuypqotiuv ; /usr/bin/python3'
Oct 08 09:45:42 compute-0 sudo[88027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:42 compute-0 python3[88029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.swlvov/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:42 compute-0 podman[88030]: 2025-10-08 09:45:42.30789712 +0000 UTC m=+0.039337987 container create b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:42 compute-0 systemd[1]: Started libpod-conmon-b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e.scope.
Oct 08 09:45:42 compute-0 ceph-mon[73572]: 2.5 scrub ok
Oct 08 09:45:42 compute-0 ceph-mon[73572]: pgmap v97: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:42 compute-0 ceph-mon[73572]: 4.4 scrub starts
Oct 08 09:45:42 compute-0 ceph-mon[73572]: 4.4 scrub ok
Oct 08 09:45:42 compute-0 ceph-mon[73572]: 7.15 scrub starts
Oct 08 09:45:42 compute-0 ceph-mon[73572]: 7.15 scrub ok
Oct 08 09:45:42 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:42 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:42 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:45:42 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:45:42 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:42 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:42 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:42 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d032fc449a728361b35c05ad4c53d340ad8911472f4cb93690b9759f8109bd7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d032fc449a728361b35c05ad4c53d340ad8911472f4cb93690b9759f8109bd7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d032fc449a728361b35c05ad4c53d340ad8911472f4cb93690b9759f8109bd7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:42 compute-0 podman[88030]: 2025-10-08 09:45:42.291646911 +0000 UTC m=+0.023087798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:42 compute-0 podman[88030]: 2025-10-08 09:45:42.386719378 +0000 UTC m=+0.118160255 container init b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Oct 08 09:45:42 compute-0 podman[88030]: 2025-10-08 09:45:42.392939276 +0000 UTC m=+0.124380153 container start b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:42 compute-0 podman[88030]: 2025-10-08 09:45:42.395630062 +0000 UTC m=+0.127070929 container attach b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:45:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.swlvov/server_addr}] v 0)
Oct 08 09:45:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1595921047' entity='client.admin' 
Oct 08 09:45:42 compute-0 systemd[1]: libpod-b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e.scope: Deactivated successfully.
Oct 08 09:45:42 compute-0 podman[88030]: 2025-10-08 09:45:42.765999309 +0000 UTC m=+0.497440206 container died b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 08 09:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d032fc449a728361b35c05ad4c53d340ad8911472f4cb93690b9759f8109bd7-merged.mount: Deactivated successfully.
Oct 08 09:45:42 compute-0 podman[88030]: 2025-10-08 09:45:42.803839468 +0000 UTC m=+0.535280345 container remove b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 09:45:42 compute-0 systemd[1]: libpod-conmon-b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e.scope: Deactivated successfully.
Oct 08 09:45:42 compute-0 sudo[88027]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:43 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 08 09:45:43 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 08 09:45:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:43 compute-0 ceph-mon[73572]: 2.1d scrub starts
Oct 08 09:45:43 compute-0 ceph-mon[73572]: 2.1d scrub ok
Oct 08 09:45:43 compute-0 ceph-mon[73572]: Deploying daemon rgw.rgw.compute-2.pgshil on compute-2
Oct 08 09:45:43 compute-0 ceph-mon[73572]: 5.5 scrub starts
Oct 08 09:45:43 compute-0 ceph-mon[73572]: 5.5 scrub ok
Oct 08 09:45:43 compute-0 ceph-mon[73572]: 7.17 deep-scrub starts
Oct 08 09:45:43 compute-0 ceph-mon[73572]: 7.17 deep-scrub ok
Oct 08 09:45:43 compute-0 ceph-mon[73572]: 7.5 scrub starts
Oct 08 09:45:43 compute-0 ceph-mon[73572]: 7.5 scrub ok
Oct 08 09:45:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1595921047' entity='client.admin' 
Oct 08 09:45:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 08 09:45:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 08 09:45:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:45:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:45:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct 08 09:45:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:43 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.aaugis on compute-1
Oct 08 09:45:43 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.aaugis on compute-1
Oct 08 09:45:43 compute-0 sudo[88107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lptduvskzkrmjrprclggyhpkxnowofmb ; /usr/bin/python3'
Oct 08 09:45:43 compute-0 sudo[88107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:43 compute-0 python3[88109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.mtagwx/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:43 compute-0 podman[88110]: 2025-10-08 09:45:43.787749838 +0000 UTC m=+0.034397698 container create 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:43 compute-0 systemd[1]: Started libpod-conmon-1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76.scope.
Oct 08 09:45:43 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9594b96a8dc5717da9464ca3c313359c351d6b8b01b8b59229284948a0742c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9594b96a8dc5717da9464ca3c313359c351d6b8b01b8b59229284948a0742c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9594b96a8dc5717da9464ca3c313359c351d6b8b01b8b59229284948a0742c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:43 compute-0 podman[88110]: 2025-10-08 09:45:43.860480871 +0000 UTC m=+0.107128751 container init 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:43 compute-0 podman[88110]: 2025-10-08 09:45:43.867382472 +0000 UTC m=+0.114030322 container start 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:45:43 compute-0 podman[88110]: 2025-10-08 09:45:43.773895706 +0000 UTC m=+0.020543576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:43 compute-0 podman[88110]: 2025-10-08 09:45:43.870874314 +0000 UTC m=+0.117522184 container attach 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 09:45:44 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 08 09:45:44 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 08 09:45:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.mtagwx/server_addr}] v 0)
Oct 08 09:45:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/277292669' entity='client.admin' 
Oct 08 09:45:44 compute-0 systemd[1]: libpod-1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76.scope: Deactivated successfully.
Oct 08 09:45:44 compute-0 podman[88110]: 2025-10-08 09:45:44.229530257 +0000 UTC m=+0.476178107 container died 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-da9594b96a8dc5717da9464ca3c313359c351d6b8b01b8b59229284948a0742c-merged.mount: Deactivated successfully.
Oct 08 09:45:44 compute-0 podman[88110]: 2025-10-08 09:45:44.259871186 +0000 UTC m=+0.506519036 container remove 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:44 compute-0 systemd[1]: libpod-conmon-1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76.scope: Deactivated successfully.
Oct 08 09:45:44 compute-0 sudo[88107]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 08 09:45:44 compute-0 ceph-mon[73572]: pgmap v98: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:44 compute-0 ceph-mon[73572]: 6.6 scrub starts
Oct 08 09:45:44 compute-0 ceph-mon[73572]: 6.6 scrub ok
Oct 08 09:45:44 compute-0 ceph-mon[73572]: 2.3 scrub starts
Oct 08 09:45:44 compute-0 ceph-mon[73572]: 2.3 scrub ok
Oct 08 09:45:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:45:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:45:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:44 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/277292669' entity='client.admin' 
Oct 08 09:45:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 08 09:45:44 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 08 09:45:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Oct 08 09:45:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 08 09:45:44 compute-0 sudo[88184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljpmtkrpqdirkukbmjiwjqbuhwsgfvtb ; /usr/bin/python3'
Oct 08 09:45:44 compute-0 sudo[88184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:44 compute-0 python3[88186]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:44 compute-0 podman[88187]: 2025-10-08 09:45:44.622816586 +0000 UTC m=+0.034325887 container create b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:44 compute-0 systemd[1]: Started libpod-conmon-b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0.scope.
Oct 08 09:45:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94482dda6eb18080e9a2cb1d41ab72002a31e3633b00ede41b4129b913b8a75a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94482dda6eb18080e9a2cb1d41ab72002a31e3633b00ede41b4129b913b8a75a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94482dda6eb18080e9a2cb1d41ab72002a31e3633b00ede41b4129b913b8a75a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:44 compute-0 podman[88187]: 2025-10-08 09:45:44.687014066 +0000 UTC m=+0.098523387 container init b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 09:45:44 compute-0 podman[88187]: 2025-10-08 09:45:44.69244182 +0000 UTC m=+0.103951121 container start b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 09:45:44 compute-0 podman[88187]: 2025-10-08 09:45:44.696593453 +0000 UTC m=+0.108102754 container attach b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 09:45:44 compute-0 podman[88187]: 2025-10-08 09:45:44.607961322 +0000 UTC m=+0.019470643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:44 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 36 pg[8.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v100: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:45 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 08 09:45:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3257796446' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 08 09:45:45 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 08 09:45:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:45:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:45 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.wdkdxi on compute-0
Oct 08 09:45:45 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.wdkdxi on compute-0
Oct 08 09:45:45 compute-0 sudo[88226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:45 compute-0 sudo[88226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:45 compute-0 sudo[88226]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 08 09:45:45 compute-0 sudo[88251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:45 compute-0 sudo[88251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 08 09:45:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 08 09:45:45 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 37 pg[8.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:45 compute-0 ceph-mon[73572]: 2.a scrub starts
Oct 08 09:45:45 compute-0 ceph-mon[73572]: 2.a scrub ok
Oct 08 09:45:45 compute-0 ceph-mon[73572]: Deploying daemon rgw.rgw.compute-1.aaugis on compute-1
Oct 08 09:45:45 compute-0 ceph-mon[73572]: 3.2 scrub starts
Oct 08 09:45:45 compute-0 ceph-mon[73572]: 3.2 scrub ok
Oct 08 09:45:45 compute-0 ceph-mon[73572]: 2.0 deep-scrub starts
Oct 08 09:45:45 compute-0 ceph-mon[73572]: 2.0 deep-scrub ok
Oct 08 09:45:45 compute-0 ceph-mon[73572]: osdmap e36: 3 total, 3 up, 3 in
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/947715731' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3257796446' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:45 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3257796446' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 08 09:45:45 compute-0 hardcore_proskuriakova[88202]: module 'dashboard' is already disabled
Oct 08 09:45:45 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.ixicfj(active, since 2m), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:45:45 compute-0 systemd[1]: libpod-b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0.scope: Deactivated successfully.
Oct 08 09:45:45 compute-0 podman[88187]: 2025-10-08 09:45:45.537210756 +0000 UTC m=+0.948720057 container died b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 08 09:45:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-94482dda6eb18080e9a2cb1d41ab72002a31e3633b00ede41b4129b913b8a75a-merged.mount: Deactivated successfully.
Oct 08 09:45:45 compute-0 podman[88187]: 2025-10-08 09:45:45.572058648 +0000 UTC m=+0.983567949 container remove b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:45 compute-0 systemd[1]: libpod-conmon-b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0.scope: Deactivated successfully.
Oct 08 09:45:45 compute-0 sudo[88184]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:45 compute-0 sudo[88353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktzqieisgrzuhkabzuqmglrbfnojdaka ; /usr/bin/python3'
Oct 08 09:45:45 compute-0 sudo[88353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:45 compute-0 podman[88361]: 2025-10-08 09:45:45.786389333 +0000 UTC m=+0.047862437 container create 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:45 compute-0 systemd[1]: Started libpod-conmon-1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff.scope.
Oct 08 09:45:45 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:45 compute-0 podman[88361]: 2025-10-08 09:45:45.758414302 +0000 UTC m=+0.019887416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:45 compute-0 podman[88361]: 2025-10-08 09:45:45.86407361 +0000 UTC m=+0.125546714 container init 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:45 compute-0 podman[88361]: 2025-10-08 09:45:45.870965139 +0000 UTC m=+0.132438233 container start 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:45 compute-0 python3[88360]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:45 compute-0 sleepy_mclean[88378]: 167 167
Oct 08 09:45:45 compute-0 podman[88361]: 2025-10-08 09:45:45.874119205 +0000 UTC m=+0.135592329 container attach 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:45 compute-0 systemd[1]: libpod-1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff.scope: Deactivated successfully.
Oct 08 09:45:45 compute-0 podman[88361]: 2025-10-08 09:45:45.875216769 +0000 UTC m=+0.136689863 container died 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e744f99c274767563f927a72ec34de532bd3de066f2d2fc98c1ee34bea0cb48-merged.mount: Deactivated successfully.
Oct 08 09:45:45 compute-0 podman[88361]: 2025-10-08 09:45:45.918413484 +0000 UTC m=+0.179886578 container remove 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:45:45 compute-0 systemd[1]: libpod-conmon-1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff.scope: Deactivated successfully.
Oct 08 09:45:45 compute-0 podman[88383]: 2025-10-08 09:45:45.948373657 +0000 UTC m=+0.061768982 container create b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:45 compute-0 systemd[1]: Started libpod-conmon-b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419.scope.
Oct 08 09:45:45 compute-0 systemd[1]: Reloading.
Oct 08 09:45:46 compute-0 podman[88383]: 2025-10-08 09:45:45.929203452 +0000 UTC m=+0.042598797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:46 compute-0 systemd-rc-local-generator[88436]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:45:46 compute-0 systemd-sysv-generator[88440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:45:46 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 08 09:45:46 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 08 09:45:46 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3489bf9e0d9f1c165d1f2175d8c75494fc562480b67202ef7ef8da0e8ca50f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3489bf9e0d9f1c165d1f2175d8c75494fc562480b67202ef7ef8da0e8ca50f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3489bf9e0d9f1c165d1f2175d8c75494fc562480b67202ef7ef8da0e8ca50f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:46 compute-0 podman[88383]: 2025-10-08 09:45:46.27821436 +0000 UTC m=+0.391609675 container init b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:46 compute-0 podman[88383]: 2025-10-08 09:45:46.292211626 +0000 UTC m=+0.405606951 container start b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:46 compute-0 podman[88383]: 2025-10-08 09:45:46.300004373 +0000 UTC m=+0.413399708 container attach b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:46 compute-0 systemd[1]: Reloading.
Oct 08 09:45:46 compute-0 systemd-rc-local-generator[88479]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:45:46 compute-0 systemd-sysv-generator[88484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 08 09:45:46 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 08 09:45:46 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 38 pg[9.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 08 09:45:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 08 09:45:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 08 09:45:46 compute-0 ceph-mon[73572]: 7.1d scrub starts
Oct 08 09:45:46 compute-0 ceph-mon[73572]: 7.1d scrub ok
Oct 08 09:45:46 compute-0 ceph-mon[73572]: pgmap v100: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:46 compute-0 ceph-mon[73572]: 3.1 scrub starts
Oct 08 09:45:46 compute-0 ceph-mon[73572]: 3.1 scrub ok
Oct 08 09:45:46 compute-0 ceph-mon[73572]: 7.0 scrub starts
Oct 08 09:45:46 compute-0 ceph-mon[73572]: Deploying daemon rgw.rgw.compute-0.wdkdxi on compute-0
Oct 08 09:45:46 compute-0 ceph-mon[73572]: 7.0 scrub ok
Oct 08 09:45:46 compute-0 ceph-mon[73572]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 08 09:45:46 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 08 09:45:46 compute-0 ceph-mon[73572]: 7.a deep-scrub starts
Oct 08 09:45:46 compute-0 ceph-mon[73572]: osdmap e37: 3 total, 3 up, 3 in
Oct 08 09:45:46 compute-0 ceph-mon[73572]: 7.a deep-scrub ok
Oct 08 09:45:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3257796446' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mgrmap e12: compute-0.ixicfj(active, since 2m), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:45:46 compute-0 ceph-mon[73572]: osdmap e38: 3 total, 3 up, 3 in
Oct 08 09:45:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4200026288' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 08 09:45:46 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 08 09:45:46 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 08 09:45:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1900470648' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 08 09:45:46 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.wdkdxi for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct 08 09:45:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/658446886' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 08 09:45:46 compute-0 podman[88556]: 2025-10-08 09:45:46.816951974 +0000 UTC m=+0.046517367 container create c6c7ccd8691da02c370a2b1b8f6e81e0e8a2c78d520d1f38bf935f7230fcff70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-rgw-rgw-compute-0-wdkdxi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68787ad48e9458d259bfa260d4ff4667ae81a05eeff6709f8952ea2e53d3187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68787ad48e9458d259bfa260d4ff4667ae81a05eeff6709f8952ea2e53d3187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68787ad48e9458d259bfa260d4ff4667ae81a05eeff6709f8952ea2e53d3187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68787ad48e9458d259bfa260d4ff4667ae81a05eeff6709f8952ea2e53d3187/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.wdkdxi supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:46 compute-0 podman[88556]: 2025-10-08 09:45:46.869388011 +0000 UTC m=+0.098953414 container init c6c7ccd8691da02c370a2b1b8f6e81e0e8a2c78d520d1f38bf935f7230fcff70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-rgw-rgw-compute-0-wdkdxi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:46 compute-0 podman[88556]: 2025-10-08 09:45:46.87363584 +0000 UTC m=+0.103201233 container start c6c7ccd8691da02c370a2b1b8f6e81e0e8a2c78d520d1f38bf935f7230fcff70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-rgw-rgw-compute-0-wdkdxi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 09:45:46 compute-0 bash[88556]: c6c7ccd8691da02c370a2b1b8f6e81e0e8a2c78d520d1f38bf935f7230fcff70
Oct 08 09:45:46 compute-0 podman[88556]: 2025-10-08 09:45:46.79874905 +0000 UTC m=+0.028314493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:45:46 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.wdkdxi for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:45:46 compute-0 radosgw[88577]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 08 09:45:46 compute-0 radosgw[88577]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct 08 09:45:46 compute-0 radosgw[88577]: framework: beast
Oct 08 09:45:46 compute-0 radosgw[88577]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 08 09:45:46 compute-0 radosgw[88577]: init_numa not setting numa affinity
Oct 08 09:45:46 compute-0 sudo[88251]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:45:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:45:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v103: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 08 09:45:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:46 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 84ae7ebc-c8b9-4226-9ef4-d352c70615bc (Updating rgw.rgw deployment (+3 -> 3))
Oct 08 09:45:46 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 84ae7ebc-c8b9-4226-9ef4-d352c70615bc (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Oct 08 09:45:46 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 08 09:45:46 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 08 09:45:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 08 09:45:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 943a6973-6405-40f5-87ab-42ef16849f0e (Updating node-exporter deployment (+3 -> 3))
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Oct 08 09:45:47 compute-0 sudo[89164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:47 compute-0 sudo[89164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:47 compute-0 sudo[89164]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:47 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.0 deep-scrub starts
Oct 08 09:45:47 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.0 deep-scrub ok
Oct 08 09:45:47 compute-0 sudo[89189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:47 compute-0 sudo[89189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:47 compute-0 systemd[1]: Reloading.
Oct 08 09:45:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 08 09:45:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 08 09:45:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 08 09:45:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 08 09:45:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 08 09:45:47 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 39 pg[9.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:47 compute-0 systemd-rc-local-generator[89283]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:45:47 compute-0 systemd-sysv-generator[89290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:45:47 compute-0 ceph-mon[73572]: 6.4 scrub starts
Oct 08 09:45:47 compute-0 ceph-mon[73572]: 6.4 scrub ok
Oct 08 09:45:47 compute-0 ceph-mon[73572]: 2.2 scrub starts
Oct 08 09:45:47 compute-0 ceph-mon[73572]: 2.2 scrub ok
Oct 08 09:45:47 compute-0 ceph-mon[73572]: 2.c scrub starts
Oct 08 09:45:47 compute-0 ceph-mon[73572]: 2.c scrub ok
Oct 08 09:45:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/658446886' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 08 09:45:47 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:47 compute-0 ceph-mon[73572]: pgmap v103: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:47 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:47 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:47 compute-0 ceph-mon[73572]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 08 09:45:47 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:47 compute-0 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:47 compute-0 ceph-mon[73572]: Deploying daemon node-exporter.compute-0 on compute-0
Oct 08 09:45:47 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 08 09:45:47 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 08 09:45:47 compute-0 ceph-mon[73572]: osdmap e39: 3 total, 3 up, 3 in
Oct 08 09:45:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/658446886' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  1: '-n'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  2: 'mgr.compute-0.ixicfj'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  3: '-f'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  4: '--setuser'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  5: 'ceph'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  6: '--setgroup'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  7: 'ceph'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  8: '--default-log-to-file=false'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  9: '--default-log-to-journald=true'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr respawn  exe_path /proc/self/exe
Oct 08 09:45:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.ixicfj(active, since 2m), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:45:47 compute-0 podman[88383]: 2025-10-08 09:45:47.573015585 +0000 UTC m=+1.686410930 container died b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:47 compute-0 sshd-session[75205]: Connection closed by 192.168.122.100 port 56854
Oct 08 09:45:47 compute-0 sshd-session[75091]: Connection closed by 192.168.122.100 port 56814
Oct 08 09:45:47 compute-0 sshd-session[75149]: Connection closed by 192.168.122.100 port 56836
Oct 08 09:45:47 compute-0 sshd-session[75004]: Connection closed by 192.168.122.100 port 56780
Oct 08 09:45:47 compute-0 sshd-session[75062]: Connection closed by 192.168.122.100 port 56800
Oct 08 09:45:47 compute-0 sshd-session[74975]: Connection closed by 192.168.122.100 port 56778
Oct 08 09:45:47 compute-0 sshd-session[75176]: Connection closed by 192.168.122.100 port 56844
Oct 08 09:45:47 compute-0 sshd-session[75120]: Connection closed by 192.168.122.100 port 56820
Oct 08 09:45:47 compute-0 sshd-session[75033]: Connection closed by 192.168.122.100 port 56786
Oct 08 09:45:47 compute-0 sshd-session[74946]: Connection closed by 192.168.122.100 port 56768
Oct 08 09:45:47 compute-0 sshd-session[74917]: Connection closed by 192.168.122.100 port 56762
Oct 08 09:45:47 compute-0 sshd-session[74916]: Connection closed by 192.168.122.100 port 56752
Oct 08 09:45:47 compute-0 systemd[1]: libpod-b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 sshd-session[75117]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 sshd-session[75059]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 sshd-session[75001]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 sshd-session[74894]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 sshd-session[75173]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 sshd-session[74943]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 sshd-session[74911]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 sshd-session[74972]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 sshd-session[75202]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 sshd-session[75146]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct 08 09:45:47 compute-0 sshd-session[75030]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct 08 09:45:47 compute-0 sshd-session[75088]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec3489bf9e0d9f1c165d1f2175d8c75494fc562480b67202ef7ef8da0e8ca50f-merged.mount: Deactivated successfully.
Oct 08 09:45:47 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 08 09:45:47 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct 08 09:45:47 compute-0 podman[88383]: 2025-10-08 09:45:47.68253242 +0000 UTC m=+1.795927735 container remove b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 29.
Oct 08 09:45:47 compute-0 systemd[1]: libpod-conmon-b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419.scope: Deactivated successfully.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 22.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 31.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 33.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 32.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 24.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Session 26 logged out. Waiting for processes to exit.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Session 25 logged out. Waiting for processes to exit.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Session 28 logged out. Waiting for processes to exit.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Session 30 logged out. Waiting for processes to exit.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Session 27 logged out. Waiting for processes to exit.
Oct 08 09:45:47 compute-0 sudo[88353]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 25.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 27.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 26.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 28.
Oct 08 09:45:47 compute-0 systemd-logind[798]: Removed session 30.
Oct 08 09:45:47 compute-0 systemd[1]: Reloading.
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct 08 09:45:47 compute-0 systemd-rc-local-generator[89355]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:45:47 compute-0 systemd-sysv-generator[89358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct 08 09:45:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:47.824+0000 7f359c145140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:45:47 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct 08 09:45:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:47.910+0000 7f359c145140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:45:47 compute-0 sudo[89391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klsgnqksyyocnnrmclywkhikxuysbfct ; /usr/bin/python3'
Oct 08 09:45:47 compute-0 sudo[89391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:47 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:45:48 compute-0 python3[89396]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:48 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 08 09:45:48 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 08 09:45:48 compute-0 bash[89449]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct 08 09:45:48 compute-0 podman[89438]: 2025-10-08 09:45:48.148045225 +0000 UTC m=+0.045740604 container create b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:45:48 compute-0 systemd[1]: Started libpod-conmon-b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7.scope.
Oct 08 09:45:48 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:48 compute-0 podman[89438]: 2025-10-08 09:45:48.122947241 +0000 UTC m=+0.020642640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8bd965d281b1de97d6090e780ed71c35986901d13a335527fc6961a2bdd0c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8bd965d281b1de97d6090e780ed71c35986901d13a335527fc6961a2bdd0c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8bd965d281b1de97d6090e780ed71c35986901d13a335527fc6961a2bdd0c7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:48 compute-0 podman[89438]: 2025-10-08 09:45:48.23163455 +0000 UTC m=+0.129329959 container init b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:48 compute-0 podman[89438]: 2025-10-08 09:45:48.237699535 +0000 UTC m=+0.135394914 container start b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:48 compute-0 podman[89438]: 2025-10-08 09:45:48.241939224 +0000 UTC m=+0.139634603 container attach b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 09:45:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 08 09:45:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 08 09:45:48 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 08 09:45:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 08 09:45:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 08 09:45:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 08 09:45:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 08 09:45:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 08 09:45:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 08 09:45:48 compute-0 ceph-mon[73572]: 6.0 deep-scrub starts
Oct 08 09:45:48 compute-0 ceph-mon[73572]: 6.0 deep-scrub ok
Oct 08 09:45:48 compute-0 ceph-mon[73572]: 7.7 deep-scrub starts
Oct 08 09:45:48 compute-0 ceph-mon[73572]: 7.7 deep-scrub ok
Oct 08 09:45:48 compute-0 ceph-mon[73572]: 7.16 scrub starts
Oct 08 09:45:48 compute-0 ceph-mon[73572]: 7.16 scrub ok
Oct 08 09:45:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/658446886' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 08 09:45:48 compute-0 ceph-mon[73572]: mgrmap e13: compute-0.ixicfj(active, since 2m), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:45:48 compute-0 ceph-mon[73572]: osdmap e40: 3 total, 3 up, 3 in
Oct 08 09:45:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 08 09:45:48 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 08 09:45:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4200026288' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 08 09:45:48 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 08 09:45:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1900470648' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 08 09:45:48 compute-0 bash[89449]: Getting image source signatures
Oct 08 09:45:48 compute-0 bash[89449]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct 08 09:45:48 compute-0 bash[89449]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct 08 09:45:48 compute-0 bash[89449]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct 08 09:45:48 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct 08 09:45:48 compute-0 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:45:48 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct 08 09:45:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:48.719+0000 7f359c145140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:45:49 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 08 09:45:49 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 08 09:45:49 compute-0 bash[89449]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct 08 09:45:49 compute-0 bash[89449]: Writing manifest to image destination
Oct 08 09:45:49 compute-0 podman[89449]: 2025-10-08 09:45:49.21486677 +0000 UTC m=+1.097011295 container create 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:45:49 compute-0 podman[89449]: 2025-10-08 09:45:49.200078129 +0000 UTC m=+1.082222684 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct 08 09:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54af9510d66390823c3b362131dbb950b9145f4e5b56d1ab94c9e3f0f29ca9ac/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:49 compute-0 podman[89449]: 2025-10-08 09:45:49.270162073 +0000 UTC m=+1.152306618 container init 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:45:49 compute-0 podman[89449]: 2025-10-08 09:45:49.274628029 +0000 UTC m=+1.156772554 container start 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:45:49 compute-0 bash[89449]: 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f
Oct 08 09:45:49 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.287Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.287Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.287Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.287Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.288Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.288Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=arp
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=bcache
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=bonding
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=cpu
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=dmi
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=edac
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=entropy
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=filefd
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=netclass
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=netdev
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=netstat
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=nfs
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=nvme
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=os
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=pressure
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=rapl
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=selinux
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=softnet
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=stat
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=textfile
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=time
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=uname
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=xfs
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=zfs
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.290Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 08 09:45:49 compute-0 sudo[89189]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:49.324+0000 7f359c145140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:45:49 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Oct 08 09:45:49 compute-0 systemd[1]: session-34.scope: Consumed 26.085s CPU time.
Oct 08 09:45:49 compute-0 systemd-logind[798]: Removed session 34.
Oct 08 09:45:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 08 09:45:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 08 09:45:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 08 09:45:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 08 09:45:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 08 09:45:49 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   from numpy import show_config as show_numpy_config
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:49.488+0000 7f359c145140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:49.557+0000 7f359c145140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:45:49 compute-0 ceph-mon[73572]: 5.3 scrub starts
Oct 08 09:45:49 compute-0 ceph-mon[73572]: 5.3 scrub ok
Oct 08 09:45:49 compute-0 ceph-mon[73572]: 7.1 scrub starts
Oct 08 09:45:49 compute-0 ceph-mon[73572]: 7.1 scrub ok
Oct 08 09:45:49 compute-0 ceph-mon[73572]: 2.18 scrub starts
Oct 08 09:45:49 compute-0 ceph-mon[73572]: 2.18 scrub ok
Oct 08 09:45:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 08 09:45:49 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 08 09:45:49 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 08 09:45:49 compute-0 ceph-mon[73572]: osdmap e41: 3 total, 3 up, 3 in
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:45:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:49.736+0000 7f359c145140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:45:49 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct 08 09:45:50 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 08 09:45:50 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 08 09:45:50 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct 08 09:45:50 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct 08 09:45:50 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct 08 09:45:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 08 09:45:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 08 09:45:50 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 08 09:45:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 08 09:45:50 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 08 09:45:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 08 09:45:50 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 08 09:45:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 08 09:45:50 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 08 09:45:50 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct 08 09:45:50 compute-0 ceph-mon[73572]: 3.6 scrub starts
Oct 08 09:45:50 compute-0 ceph-mon[73572]: 3.6 scrub ok
Oct 08 09:45:50 compute-0 ceph-mon[73572]: 7.d deep-scrub starts
Oct 08 09:45:50 compute-0 ceph-mon[73572]: 7.d deep-scrub ok
Oct 08 09:45:50 compute-0 ceph-mon[73572]: 2.13 scrub starts
Oct 08 09:45:50 compute-0 ceph-mon[73572]: 2.13 scrub ok
Oct 08 09:45:50 compute-0 ceph-mon[73572]: osdmap e42: 3 total, 3 up, 3 in
Oct 08 09:45:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 08 09:45:50 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 08 09:45:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4200026288' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 08 09:45:50 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 08 09:45:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1900470648' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 08 09:45:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 42 pg[11.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:45:50 compute-0 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:45:50 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct 08 09:45:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:50.744+0000 7f359c145140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:45:50 compute-0 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:45:50 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct 08 09:45:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:50.971+0000 7f359c145140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.044+0000 7f359c145140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.110+0000 7f359c145140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct 08 09:45:51 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Oct 08 09:45:51 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.194+0000 7f359c145140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct 08 09:45:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.272+0000 7f359c145140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 08 09:45:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 08 09:45:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 08 09:45:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 08 09:45:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 08 09:45:51 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 08 09:45:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 08 09:45:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 08 09:45:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 08 09:45:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 08 09:45:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 08 09:45:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 08 09:45:51 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 43 pg[11.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:45:51 compute-0 ceph-mon[73572]: 3.7 scrub starts
Oct 08 09:45:51 compute-0 ceph-mon[73572]: 3.7 scrub ok
Oct 08 09:45:51 compute-0 ceph-mon[73572]: 2.8 scrub starts
Oct 08 09:45:51 compute-0 ceph-mon[73572]: 2.8 scrub ok
Oct 08 09:45:51 compute-0 ceph-mon[73572]: 2.12 scrub starts
Oct 08 09:45:51 compute-0 ceph-mon[73572]: 2.12 scrub ok
Oct 08 09:45:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 08 09:45:51 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 08 09:45:51 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 08 09:45:51 compute-0 ceph-mon[73572]: osdmap e43: 3 total, 3 up, 3 in
Oct 08 09:45:51 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 08 09:45:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 08 09:45:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4200026288' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 08 09:45:51 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 08 09:45:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1900470648' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.642+0000 7f359c145140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.744+0000 7f359c145140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct 08 09:45:51 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct 08 09:45:52 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Oct 08 09:45:52 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Oct 08 09:45:52 compute-0 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:45:52 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct 08 09:45:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:52.176+0000 7f359c145140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:45:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 08 09:45:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 08 09:45:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 08 09:45:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 08 09:45:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 08 09:45:52 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 08 09:45:52 compute-0 ceph-mon[73572]: 4.0 scrub starts
Oct 08 09:45:52 compute-0 ceph-mon[73572]: 4.0 scrub ok
Oct 08 09:45:52 compute-0 ceph-mon[73572]: 7.c deep-scrub starts
Oct 08 09:45:52 compute-0 ceph-mon[73572]: 7.c deep-scrub ok
Oct 08 09:45:52 compute-0 ceph-mon[73572]: 2.15 scrub starts
Oct 08 09:45:52 compute-0 ceph-mon[73572]: 2.15 scrub ok
Oct 08 09:45:52 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 08 09:45:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 08 09:45:52 compute-0 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 08 09:45:52 compute-0 ceph-mon[73572]: osdmap e44: 3 total, 3 up, 3 in
Oct 08 09:45:52 compute-0 radosgw[88577]: v1 topic migration: starting v1 topic migration..
Oct 08 09:45:52 compute-0 radosgw[88577]: LDAP not started since no server URIs were provided in the configuration.
Oct 08 09:45:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-rgw-rgw-compute-0-wdkdxi[88573]: 2025-10-08T09:45:52.670+0000 7f175ed7a980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 08 09:45:52 compute-0 radosgw[88577]: v1 topic migration: finished v1 topic migration
Oct 08 09:45:52 compute-0 radosgw[88577]: framework: beast
Oct 08 09:45:52 compute-0 radosgw[88577]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 08 09:45:52 compute-0 radosgw[88577]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 08 09:45:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 08 09:45:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 08 09:45:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 08 09:45:52 compute-0 radosgw[88577]: starting handler: beast
Oct 08 09:45:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 08 09:45:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 08 09:45:52 compute-0 radosgw[88577]: set uid:gid to 167:167 (ceph:ceph)
Oct 08 09:45:52 compute-0 radosgw[88577]: mgrc service_daemon_register rgw.14382 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.wdkdxi,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=246b4a69-3c1d-47ce-b182-d12a3d96d3e3,zone_name=default,zonegroup_id=3218c688-50d3-4b3d-9517-1c08371b4e2e,zonegroup_name=default}
Oct 08 09:45:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 08 09:45:52 compute-0 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:45:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:52.752+0000 7f359c145140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:45:52 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct 08 09:45:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 08 09:45:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 08 09:45:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:52.833+0000 7f359c145140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:45:52 compute-0 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:45:52 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct 08 09:45:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:52.930+0000 7f359c145140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:45:52 compute-0 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:45:52 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct 08 09:45:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.094+0000 7f359c145140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.164+0000 7f359c145140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct 08 09:45:53 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 08 09:45:53 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.315+0000 7f359c145140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.539+0000 7f359c145140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct 08 09:45:53 compute-0 ceph-mon[73572]: 4.7 deep-scrub starts
Oct 08 09:45:53 compute-0 ceph-mon[73572]: 4.7 deep-scrub ok
Oct 08 09:45:53 compute-0 ceph-mon[73572]: 7.19 scrub starts
Oct 08 09:45:53 compute-0 ceph-mon[73572]: 7.19 scrub ok
Oct 08 09:45:53 compute-0 ceph-mon[73572]: 6.12 scrub starts
Oct 08 09:45:53 compute-0 ceph-mon[73572]: 6.12 scrub ok
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.818+0000 7f359c145140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov restarted
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.894+0000 7f359c145140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x5565e9db7860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.ixicfj(active, starting, since 0.046874s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e1 all = 1
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [balancer INFO root] Starting
Oct 08 09:45:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:45:53
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: cephadm
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: dashboard
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [dashboard INFO access_control] Loading user roles DB version=2
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [dashboard INFO sso] Loading SSO DB version=1
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [progress INFO root] Loading...
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f351e4f4d90>, <progress.module.GhostEvent object at 0x7f351e505040>, <progress.module.GhostEvent object at 0x7f351e505070>, <progress.module.GhostEvent object at 0x7f351e5050a0>, <progress.module.GhostEvent object at 0x7f351e5050d0>, <progress.module.GhostEvent object at 0x7f351e505100>, <progress.module.GhostEvent object at 0x7f351e505130>, <progress.module.GhostEvent object at 0x7f351e505160>, <progress.module.GhostEvent object at 0x7f351e505190>, <progress.module.GhostEvent object at 0x7f351e5051c0>, <progress.module.GhostEvent object at 0x7f351e5051f0>, <progress.module.GhostEvent object at 0x7f351e505220>] historic events
Oct 08 09:45:53 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct 08 09:45:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct 08 09:45:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct 08 09:45:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct 08 09:45:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [rbd_support INFO root] setup complete
Oct 08 09:45:54 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx restarted
Oct 08 09:45:54 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct 08 09:45:54 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 08 09:45:54 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct 08 09:45:54 compute-0 sshd-session[89742]: Accepted publickey for ceph-admin from 192.168.122.100 port 48830 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:45:54 compute-0 systemd-logind[798]: New session 35 of user ceph-admin.
Oct 08 09:45:54 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.module] Engine started.
Oct 08 09:45:54 compute-0 sshd-session[89742]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:45:54 compute-0 sudo[89751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:54 compute-0 sudo[89751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:54 compute-0 sudo[89751]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:54 compute-0 sudo[89776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 09:45:54 compute-0 sudo[89776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:54 compute-0 ceph-mon[73572]: 5.6 scrub starts
Oct 08 09:45:54 compute-0 ceph-mon[73572]: 5.6 scrub ok
Oct 08 09:45:54 compute-0 ceph-mon[73572]: 7.1a deep-scrub starts
Oct 08 09:45:54 compute-0 ceph-mon[73572]: 7.1a deep-scrub ok
Oct 08 09:45:54 compute-0 ceph-mon[73572]: 2.d scrub starts
Oct 08 09:45:54 compute-0 ceph-mon[73572]: 2.d scrub ok
Oct 08 09:45:54 compute-0 ceph-mon[73572]: Standby manager daemon compute-1.swlvov restarted
Oct 08 09:45:54 compute-0 ceph-mon[73572]: Standby manager daemon compute-1.swlvov started
Oct 08 09:45:54 compute-0 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct 08 09:45:54 compute-0 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct 08 09:45:54 compute-0 ceph-mon[73572]: osdmap e45: 3 total, 3 up, 3 in
Oct 08 09:45:54 compute-0 ceph-mon[73572]: mgrmap e14: compute-0.ixicfj(active, starting, since 0.046874s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: Standby manager daemon compute-2.mtagwx restarted
Oct 08 09:45:54 compute-0 ceph-mon[73572]: Standby manager daemon compute-2.mtagwx started
Oct 08 09:45:54 compute-0 ceph-mon[73572]: 3.0 deep-scrub starts
Oct 08 09:45:54 compute-0 ceph-mon[73572]: 3.0 deep-scrub ok
Oct 08 09:45:54 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.ixicfj(active, since 1.05685s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14394 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Oct 08 09:45:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:54 compute-0 competent_babbage[89471]: Option GRAFANA_API_USERNAME updated
Oct 08 09:45:54 compute-0 systemd[1]: libpod-b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7.scope: Deactivated successfully.
Oct 08 09:45:54 compute-0 podman[89438]: 2025-10-08 09:45:54.995359913 +0000 UTC m=+6.893055302 container died b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 09:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8bd965d281b1de97d6090e780ed71c35986901d13a335527fc6961a2bdd0c7-merged.mount: Deactivated successfully.
Oct 08 09:45:55 compute-0 podman[89438]: 2025-10-08 09:45:55.035485805 +0000 UTC m=+6.933181194 container remove b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:55 compute-0 systemd[1]: libpod-conmon-b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7.scope: Deactivated successfully.
Oct 08 09:45:55 compute-0 sudo[89391]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:55 compute-0 podman[89880]: 2025-10-08 09:45:55.085944371 +0000 UTC m=+0.062545525 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:55 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 08 09:45:55 compute-0 podman[89880]: 2025-10-08 09:45:55.195346313 +0000 UTC m=+0.171947467 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:55 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 08 09:45:55 compute-0 sudo[89926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psrlwxnzdchffmxhdouayjtwvenkgyog ; /usr/bin/python3'
Oct 08 09:45:55 compute-0 sudo[89926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:55 compute-0 python3[89935]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Bus STARTING
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Bus STARTING
Oct 08 09:45:55 compute-0 podman[89980]: 2025-10-08 09:45:55.424940444 +0000 UTC m=+0.037253996 container create ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 09:45:55 compute-0 systemd[1]: Started libpod-conmon-ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b.scope.
Oct 08 09:45:55 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0a83369f082e3143486cc091df260eca6bee5cc94729f4d5021d6df0ffb54a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0a83369f082e3143486cc091df260eca6bee5cc94729f4d5021d6df0ffb54a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0a83369f082e3143486cc091df260eca6bee5cc94729f4d5021d6df0ffb54a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:45:55 compute-0 podman[89980]: 2025-10-08 09:45:55.408706389 +0000 UTC m=+0.021019961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:55 compute-0 podman[89980]: 2025-10-08 09:45:55.51514849 +0000 UTC m=+0.127462092 container init ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 08 09:45:55 compute-0 podman[89980]: 2025-10-08 09:45:55.52173017 +0000 UTC m=+0.134043742 container start ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:55 compute-0 podman[89980]: 2025-10-08 09:45:55.52601334 +0000 UTC m=+0.138326902 container attach ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:45:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Bus STARTED
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Bus STARTED
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Client ('192.168.122.100', 52474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Client ('192.168.122.100', 52474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:45:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:55 compute-0 podman[90103]: 2025-10-08 09:45:55.77128902 +0000 UTC m=+0.073032985 container exec 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:45:55 compute-0 podman[90103]: 2025-10-08 09:45:55.779574282 +0000 UTC m=+0.081318247 container exec_died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:45:55 compute-0 sudo[89776]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:45:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Oct 08 09:45:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:55 compute-0 exciting_spence[90025]: Option GRAFANA_API_PASSWORD updated
Oct 08 09:45:55 compute-0 systemd[1]: libpod-ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b.scope: Deactivated successfully.
Oct 08 09:45:55 compute-0 podman[89980]: 2025-10-08 09:45:55.918910714 +0000 UTC m=+0.531224296 container died ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:45:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:55 compute-0 sudo[90140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:55 compute-0 sudo[90140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:55 compute-0 sudo[90140]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:55 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 08 09:45:55 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 08 09:45:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 ceph-mon[73572]: 5.c scrub starts
Oct 08 09:45:56 compute-0 ceph-mon[73572]: 5.c scrub ok
Oct 08 09:45:56 compute-0 ceph-mon[73572]: 4.18 scrub starts
Oct 08 09:45:56 compute-0 ceph-mon[73572]: 4.18 scrub ok
Oct 08 09:45:56 compute-0 ceph-mon[73572]: mgrmap e15: compute-0.ixicfj(active, since 1.05685s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:45:56 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 ceph-mon[73572]: 6.f scrub starts
Oct 08 09:45:56 compute-0 ceph-mon[73572]: 6.f scrub ok
Oct 08 09:45:56 compute-0 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Bus STARTING
Oct 08 09:45:56 compute-0 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:45:56 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:45:56 compute-0 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Bus STARTED
Oct 08 09:45:56 compute-0 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Client ('192.168.122.100', 52474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:45:56 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 sudo[90174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:45:56 compute-0 sudo[90174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d0a83369f082e3143486cc091df260eca6bee5cc94729f4d5021d6df0ffb54a-merged.mount: Deactivated successfully.
Oct 08 09:45:56 compute-0 podman[89980]: 2025-10-08 09:45:56.0599849 +0000 UTC m=+0.672298452 container remove ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 09:45:56 compute-0 systemd[1]: libpod-conmon-ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b.scope: Deactivated successfully.
Oct 08 09:45:56 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct 08 09:45:56 compute-0 sudo[89926]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:56 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 08 09:45:56 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 08 09:45:56 compute-0 sudo[90253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utrhpaqgrzvrofzshqwtgzettsbftgcn ; /usr/bin/python3'
Oct 08 09:45:56 compute-0 sudo[90253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:56 compute-0 python3[90256]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:56 compute-0 podman[90259]: 2025-10-08 09:45:56.517685327 +0000 UTC m=+0.039207755 container create abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 08 09:45:56 compute-0 systemd[1]: Started libpod-conmon-abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6.scope.
Oct 08 09:45:56 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11eaa462f4e9f4bc5bbabe48f78d1a1afbcfc46705f425b52670878ab16e4fc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11eaa462f4e9f4bc5bbabe48f78d1a1afbcfc46705f425b52670878ab16e4fc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11eaa462f4e9f4bc5bbabe48f78d1a1afbcfc46705f425b52670878ab16e4fc5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:56 compute-0 podman[90259]: 2025-10-08 09:45:56.501155643 +0000 UTC m=+0.022678071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:56 compute-0 podman[90259]: 2025-10-08 09:45:56.597493087 +0000 UTC m=+0.119015505 container init abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 09:45:56 compute-0 podman[90259]: 2025-10-08 09:45:56.604534831 +0000 UTC m=+0.126057229 container start abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 09:45:56 compute-0 podman[90259]: 2025-10-08 09:45:56.609264795 +0000 UTC m=+0.130787203 container attach abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:56 compute-0 sudo[90174]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:45:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:45:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 08 09:45:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:45:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 08 09:45:56 compute-0 sudo[90293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:45:56 compute-0 sudo[90293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:56 compute-0 sudo[90293]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:56 compute-0 sudo[90318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 08 09:45:56 compute-0 sudo[90318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:56 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.ixicfj(active, since 3s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:45:56 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14430 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Oct 08 09:45:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:56 compute-0 dreamy_bose[90279]: Option ALERTMANAGER_API_HOST updated
Oct 08 09:45:56 compute-0 systemd[1]: libpod-abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6.scope: Deactivated successfully.
Oct 08 09:45:56 compute-0 conmon[90279]: conmon abc8cbf5c3539b5c3c2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6.scope/container/memory.events
Oct 08 09:45:56 compute-0 podman[90259]: 2025-10-08 09:45:56.978553511 +0000 UTC m=+0.500075909 container died abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-11eaa462f4e9f4bc5bbabe48f78d1a1afbcfc46705f425b52670878ab16e4fc5-merged.mount: Deactivated successfully.
Oct 08 09:45:57 compute-0 sudo[90318]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 ceph-mon[73572]: 6.1a deep-scrub starts
Oct 08 09:45:57 compute-0 ceph-mon[73572]: 6.1a deep-scrub ok
Oct 08 09:45:57 compute-0 ceph-mon[73572]: 7.14 scrub starts
Oct 08 09:45:57 compute-0 ceph-mon[73572]: 7.14 scrub ok
Oct 08 09:45:57 compute-0 ceph-mon[73572]: from='client.14418 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:57 compute-0 ceph-mon[73572]: pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:57 compute-0 ceph-mon[73572]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 08 09:45:57 compute-0 ceph-mon[73572]: Cluster is now healthy
Oct 08 09:45:57 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:57 compute-0 ceph-mon[73572]: 3.b scrub starts
Oct 08 09:45:57 compute-0 ceph-mon[73572]: 3.b scrub ok
Oct 08 09:45:57 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:57 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:57 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:45:57 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mgrmap e16: compute-0.ixicfj(active, since 3s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:45:57 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:57 compute-0 podman[90259]: 2025-10-08 09:45:57.019326062 +0000 UTC m=+0.540848460 container remove abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:45:57 compute-0 systemd[1]: libpod-conmon-abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6.scope: Deactivated successfully.
Oct 08 09:45:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:45:57 compute-0 sudo[90253]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 08 09:45:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:45:57 compute-0 sudo[90417]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gybretbtjjiczmjclqzdpehvwgifmbrr ; /usr/bin/python3'
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:57 compute-0 sudo[90417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 08 09:45:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:45:57 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:45:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:45:57 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 08 09:45:57 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 08 09:45:57 compute-0 sudo[90420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 08 09:45:57 compute-0 sudo[90420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90420]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 sudo[90445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph
Oct 08 09:45:57 compute-0 sudo[90445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90445]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 python3[90419]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:57 compute-0 sudo[90470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:45:57 compute-0 sudo[90470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90470]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 podman[90491]: 2025-10-08 09:45:57.457225896 +0000 UTC m=+0.054299585 container create bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:45:57 compute-0 systemd[1]: Started libpod-conmon-bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd.scope.
Oct 08 09:45:57 compute-0 sudo[90501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:57 compute-0 sudo[90501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90501]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20096b8cc0cdcfd83de6ae8f2ef4bd6147a85c4ae22cd53442a955faf79aef42/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20096b8cc0cdcfd83de6ae8f2ef4bd6147a85c4ae22cd53442a955faf79aef42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20096b8cc0cdcfd83de6ae8f2ef4bd6147a85c4ae22cd53442a955faf79aef42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:57 compute-0 podman[90491]: 2025-10-08 09:45:57.528767304 +0000 UTC m=+0.125841003 container init bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:57 compute-0 podman[90491]: 2025-10-08 09:45:57.44227634 +0000 UTC m=+0.039350059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:57 compute-0 podman[90491]: 2025-10-08 09:45:57.539665165 +0000 UTC m=+0.136738854 container start bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 09:45:57 compute-0 podman[90491]: 2025-10-08 09:45:57.542881214 +0000 UTC m=+0.139954903 container attach bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:57 compute-0 sudo[90539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:45:57 compute-0 sudo[90539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90539]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 sudo[90589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:45:57 compute-0 sudo[90589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90589]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 sudo[90632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:45:57 compute-0 sudo[90632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90632]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:57 compute-0 sudo[90657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 08 09:45:57 compute-0 sudo[90657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90657]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:57 compute-0 sudo[90682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:45:57 compute-0 sudo[90682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90682]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14436 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Oct 08 09:45:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:57 compute-0 serene_dhawan[90535]: Option PROMETHEUS_API_HOST updated
Oct 08 09:45:57 compute-0 sudo[90707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:45:57 compute-0 sudo[90707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90707]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 systemd[1]: libpod-bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd.scope: Deactivated successfully.
Oct 08 09:45:57 compute-0 podman[90491]: 2025-10-08 09:45:57.928916428 +0000 UTC m=+0.525990147 container died bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 09:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-20096b8cc0cdcfd83de6ae8f2ef4bd6147a85c4ae22cd53442a955faf79aef42-merged.mount: Deactivated successfully.
Oct 08 09:45:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:57 compute-0 podman[90491]: 2025-10-08 09:45:57.967129521 +0000 UTC m=+0.564203200 container remove bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 08 09:45:57 compute-0 sudo[90734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:45:57 compute-0 sudo[90734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:57 compute-0 sudo[90734]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:57 compute-0 systemd[1]: libpod-conmon-bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd.scope: Deactivated successfully.
Oct 08 09:45:57 compute-0 sudo[90417]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 sudo[90770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:58 compute-0 sudo[90770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[90770]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 ceph-mon[73572]: 4.1a scrub starts
Oct 08 09:45:58 compute-0 ceph-mon[73572]: 4.1a scrub ok
Oct 08 09:45:58 compute-0 ceph-mon[73572]: 4.1f scrub starts
Oct 08 09:45:58 compute-0 ceph-mon[73572]: 4.1f scrub ok
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='client.14430 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:45:58 compute-0 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:45:58 compute-0 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:45:58 compute-0 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:45:58 compute-0 ceph-mon[73572]: 4.b scrub starts
Oct 08 09:45:58 compute-0 ceph-mon[73572]: 4.b scrub ok
Oct 08 09:45:58 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:58 compute-0 sudo[90795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:45:58 compute-0 sudo[90795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[90795]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 sudo[90866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpmrdbprlbondqxcydyyyjklcbgqgvpj ; /usr/bin/python3'
Oct 08 09:45:58 compute-0 sudo[90866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:58 compute-0 sudo[90868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:45:58 compute-0 sudo[90868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[90868]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 08 09:45:58 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 08 09:45:58 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.ixicfj(active, since 4s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:45:58 compute-0 sudo[90894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:45:58 compute-0 sudo[90894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[90894]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 python3[90869]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:45:58 compute-0 sudo[90920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:58 compute-0 sudo[90920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 podman[90919]: 2025-10-08 09:45:58.356173418 +0000 UTC m=+0.049634503 container create fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 09:45:58 compute-0 sudo[90920]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 systemd[1]: Started libpod-conmon-fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246.scope.
Oct 08 09:45:58 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:58 compute-0 sudo[90957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 08 09:45:58 compute-0 sudo[90957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c2832265005c4a161d9257384ad940178be9d4131f14f5c07a02f8a9aaf2b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c2832265005c4a161d9257384ad940178be9d4131f14f5c07a02f8a9aaf2b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c2832265005c4a161d9257384ad940178be9d4131f14f5c07a02f8a9aaf2b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:58 compute-0 sudo[90957]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 podman[90919]: 2025-10-08 09:45:58.337457498 +0000 UTC m=+0.030918593 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:58 compute-0 podman[90919]: 2025-10-08 09:45:58.439571538 +0000 UTC m=+0.133032623 container init fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 09:45:58 compute-0 podman[90919]: 2025-10-08 09:45:58.446580591 +0000 UTC m=+0.140041676 container start fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 08 09:45:58 compute-0 podman[90919]: 2025-10-08 09:45:58.450625714 +0000 UTC m=+0.144086799 container attach fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 sudo[90987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph
Oct 08 09:45:58 compute-0 sudo[90987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[90987]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 sudo[91014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:45:58 compute-0 sudo[91014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[91014]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 sudo[91039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:58 compute-0 sudo[91039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[91039]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 sudo[91083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:45:58 compute-0 sudo[91083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[91083]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 sudo[91131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:45:58 compute-0 sudo[91131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[91131]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14442 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct 08 09:45:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:58 compute-0 distracted_austin[90977]: Option GRAFANA_API_URL updated
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 systemd[1]: libpod-fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246.scope: Deactivated successfully.
Oct 08 09:45:58 compute-0 podman[90919]: 2025-10-08 09:45:58.834529774 +0000 UTC m=+0.527990829 container died fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 08 09:45:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0c2832265005c4a161d9257384ad940178be9d4131f14f5c07a02f8a9aaf2b5-merged.mount: Deactivated successfully.
Oct 08 09:45:58 compute-0 sudo[91156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:45:58 compute-0 sudo[91156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 podman[90919]: 2025-10-08 09:45:58.871447937 +0000 UTC m=+0.564909002 container remove fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:45:58 compute-0 sudo[91156]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 systemd[1]: libpod-conmon-fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246.scope: Deactivated successfully.
Oct 08 09:45:58 compute-0 sudo[90866]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 sudo[91195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 sudo[91195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:58 compute-0 sudo[91195]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:45:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:45:59 compute-0 sudo[91220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:45:59 compute-0 sudo[91220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:59 compute-0 sudo[91220]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:59 compute-0 sudo[91274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylmoqzjefirqksxhkwscwavfdgnlqayz ; /usr/bin/python3'
Oct 08 09:45:59 compute-0 sudo[91274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: 4.1b scrub starts
Oct 08 09:45:59 compute-0 ceph-mon[73572]: 4.1b scrub ok
Oct 08 09:45:59 compute-0 ceph-mon[73572]: 2.f deep-scrub starts
Oct 08 09:45:59 compute-0 ceph-mon[73572]: 2.f deep-scrub ok
Oct 08 09:45:59 compute-0 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:59 compute-0 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:59 compute-0 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:45:59 compute-0 ceph-mon[73572]: from='client.14436 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:45:59 compute-0 ceph-mon[73572]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:45:59 compute-0 ceph-mon[73572]: 5.a scrub starts
Oct 08 09:45:59 compute-0 ceph-mon[73572]: 5.a scrub ok
Oct 08 09:45:59 compute-0 ceph-mon[73572]: mgrmap e17: compute-0.ixicfj(active, since 4s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:45:59 compute-0 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:59 compute-0 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:59 compute-0 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:45:59 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:59 compute-0 sudo[91262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:45:59 compute-0 sudo[91262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:59 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:45:59 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:45:59 compute-0 sudo[91262]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:59 compute-0 sudo[91296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:45:59 compute-0 sudo[91296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:59 compute-0 sudo[91296]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:59 compute-0 python3[91288]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:45:59 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Oct 08 09:45:59 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Oct 08 09:45:59 compute-0 sudo[91321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:45:59 compute-0 sudo[91321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:59 compute-0 sudo[91321]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:59 compute-0 podman[91344]: 2025-10-08 09:45:59.25315136 +0000 UTC m=+0.051151128 container create 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:45:59 compute-0 sudo[91354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:45:59 compute-0 sudo[91354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:59 compute-0 sudo[91354]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:59 compute-0 systemd[1]: Started libpod-conmon-0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479.scope.
Oct 08 09:45:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ce874085f0958d14ae936aee77d1004dc53418b22b672d38f40d8e40239d7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ce874085f0958d14ae936aee77d1004dc53418b22b672d38f40d8e40239d7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ce874085f0958d14ae936aee77d1004dc53418b22b672d38f40d8e40239d7d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:45:59 compute-0 podman[91344]: 2025-10-08 09:45:59.230202071 +0000 UTC m=+0.028201849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:45:59 compute-0 podman[91344]: 2025-10-08 09:45:59.331901259 +0000 UTC m=+0.129901057 container init 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:45:59 compute-0 podman[91344]: 2025-10-08 09:45:59.342418498 +0000 UTC m=+0.140418286 container start 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:59 compute-0 podman[91344]: 2025-10-08 09:45:59.345784641 +0000 UTC m=+0.143784429 container attach 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:59 compute-0 sudo[91413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:45:59 compute-0 sudo[91413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:59 compute-0 sudo[91413]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:59 compute-0 sudo[91438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:45:59 compute-0 sudo[91438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:59 compute-0 sudo[91438]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:59 compute-0 sudo[91482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:45:59 compute-0 sudo[91482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:45:59 compute-0 sudo[91482]: pam_unix(sudo:session): session closed for user root
Oct 08 09:45:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:45:59 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev f9bd10ca-c3c1-4645-b329-5c0fc669d3eb (Updating node-exporter deployment (+2 -> 3))
Oct 08 09:45:59 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Oct 08 09:45:59 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Oct 08 09:45:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct 08 09:45:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1343562250' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 08 09:45:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 14 op/s
Oct 08 09:46:00 compute-0 ceph-mon[73572]: 6.e scrub starts
Oct 08 09:46:00 compute-0 ceph-mon[73572]: 6.e scrub ok
Oct 08 09:46:00 compute-0 ceph-mon[73572]: 7.11 scrub starts
Oct 08 09:46:00 compute-0 ceph-mon[73572]: 7.11 scrub ok
Oct 08 09:46:00 compute-0 ceph-mon[73572]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:46:00 compute-0 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:00 compute-0 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:00 compute-0 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:00 compute-0 ceph-mon[73572]: 6.9 deep-scrub starts
Oct 08 09:46:00 compute-0 ceph-mon[73572]: 6.9 deep-scrub ok
Oct 08 09:46:00 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:00 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:00 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:00 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:00 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:00 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:00 compute-0 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1343562250' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 08 09:46:00 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 08 09:46:00 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 08 09:46:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1343562250' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 08 09:46:00 compute-0 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 08 09:46:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.ixicfj(active, since 6s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:46:00 compute-0 systemd[1]: libpod-0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479.scope: Deactivated successfully.
Oct 08 09:46:00 compute-0 podman[91344]: 2025-10-08 09:46:00.761958362 +0000 UTC m=+1.559958170 container died 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Oct 08 09:46:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7ce874085f0958d14ae936aee77d1004dc53418b22b672d38f40d8e40239d7d-merged.mount: Deactivated successfully.
Oct 08 09:46:00 compute-0 podman[91344]: 2025-10-08 09:46:00.803483507 +0000 UTC m=+1.601483295 container remove 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:46:00 compute-0 sshd-session[89750]: Connection closed by 192.168.122.100 port 48830
Oct 08 09:46:00 compute-0 sshd-session[89742]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:46:00 compute-0 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Oct 08 09:46:00 compute-0 systemd[1]: libpod-conmon-0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479.scope: Deactivated successfully.
Oct 08 09:46:00 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Oct 08 09:46:00 compute-0 systemd[1]: session-35.scope: Consumed 4.805s CPU time.
Oct 08 09:46:00 compute-0 systemd-logind[798]: Removed session 35.
Oct 08 09:46:00 compute-0 sudo[91274]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct 08 09:46:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct 08 09:46:00 compute-0 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 08 09:46:00 compute-0 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct 08 09:46:00 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct 08 09:46:00 compute-0 sudo[91564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwtkbumidoabyzdprurwuoljzczxptka ; /usr/bin/python3'
Oct 08 09:46:00 compute-0 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:46:00 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct 08 09:46:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:00.978+0000 7f67ee4c5140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:46:00 compute-0 sudo[91564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:01 compute-0 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:46:01 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct 08 09:46:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:01.060+0000 7f67ee4c5140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:46:01 compute-0 ceph-mon[73572]: 6.19 scrub starts
Oct 08 09:46:01 compute-0 ceph-mon[73572]: 6.19 scrub ok
Oct 08 09:46:01 compute-0 ceph-mon[73572]: 4.8 scrub starts
Oct 08 09:46:01 compute-0 ceph-mon[73572]: 4.8 scrub ok
Oct 08 09:46:01 compute-0 ceph-mon[73572]: Deploying daemon node-exporter.compute-1 on compute-1
Oct 08 09:46:01 compute-0 ceph-mon[73572]: pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 14 op/s
Oct 08 09:46:01 compute-0 ceph-mon[73572]: 6.b scrub starts
Oct 08 09:46:01 compute-0 ceph-mon[73572]: 6.b scrub ok
Oct 08 09:46:01 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1343562250' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 08 09:46:01 compute-0 ceph-mon[73572]: mgrmap e18: compute-0.ixicfj(active, since 6s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:46:01 compute-0 python3[91566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:01 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 08 09:46:01 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 08 09:46:01 compute-0 podman[91567]: 2025-10-08 09:46:01.226961652 +0000 UTC m=+0.083620138 container create b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:01 compute-0 systemd[1]: Started libpod-conmon-b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5.scope.
Oct 08 09:46:01 compute-0 podman[91567]: 2025-10-08 09:46:01.180729214 +0000 UTC m=+0.037387760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdf2017d01e363e54c73eb4446356611017dc9c6b2826887b374758d5aab149/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdf2017d01e363e54c73eb4446356611017dc9c6b2826887b374758d5aab149/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdf2017d01e363e54c73eb4446356611017dc9c6b2826887b374758d5aab149/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:01 compute-0 podman[91567]: 2025-10-08 09:46:01.297455178 +0000 UTC m=+0.154113664 container init b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 08 09:46:01 compute-0 podman[91567]: 2025-10-08 09:46:01.304260215 +0000 UTC m=+0.160918681 container start b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 08 09:46:01 compute-0 podman[91567]: 2025-10-08 09:46:01.31163322 +0000 UTC m=+0.168291686 container attach b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 09:46:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct 08 09:46:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3794820163' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 08 09:46:01 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct 08 09:46:01 compute-0 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:46:01 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct 08 09:46:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:01.861+0000 7f67ee4c5140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:46:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3794820163' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 08 09:46:02 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.ixicfj(active, since 8s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:46:02 compute-0 systemd[1]: libpod-b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5.scope: Deactivated successfully.
Oct 08 09:46:02 compute-0 podman[91567]: 2025-10-08 09:46:02.115883099 +0000 UTC m=+0.972541555 container died b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:02 compute-0 ceph-mon[73572]: 5.1c scrub starts
Oct 08 09:46:02 compute-0 ceph-mon[73572]: 5.1c scrub ok
Oct 08 09:46:02 compute-0 ceph-mon[73572]: 4.1 scrub starts
Oct 08 09:46:02 compute-0 ceph-mon[73572]: 4.1 scrub ok
Oct 08 09:46:02 compute-0 ceph-mon[73572]: 4.c deep-scrub starts
Oct 08 09:46:02 compute-0 ceph-mon[73572]: 4.17 scrub starts
Oct 08 09:46:02 compute-0 ceph-mon[73572]: 4.c deep-scrub ok
Oct 08 09:46:02 compute-0 ceph-mon[73572]: 4.17 scrub ok
Oct 08 09:46:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3794820163' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 08 09:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cdf2017d01e363e54c73eb4446356611017dc9c6b2826887b374758d5aab149-merged.mount: Deactivated successfully.
Oct 08 09:46:02 compute-0 podman[91567]: 2025-10-08 09:46:02.159573119 +0000 UTC m=+1.016231575 container remove b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:02 compute-0 systemd[1]: libpod-conmon-b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5.scope: Deactivated successfully.
Oct 08 09:46:02 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 08 09:46:02 compute-0 sudo[91564]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:02 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct 08 09:46:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:02.567+0000 7f67ee4c5140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct 08 09:46:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 08 09:46:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 08 09:46:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   from numpy import show_config as show_numpy_config
Oct 08 09:46:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:02.731+0000 7f67ee4c5140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct 08 09:46:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:02.798+0000 7f67ee4c5140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct 08 09:46:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:02.933+0000 7f67ee4c5140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:46:02 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct 08 09:46:03 compute-0 python3[91707]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:46:03 compute-0 ceph-mon[73572]: 4.2 deep-scrub starts
Oct 08 09:46:03 compute-0 ceph-mon[73572]: 4.2 deep-scrub ok
Oct 08 09:46:03 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3794820163' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 08 09:46:03 compute-0 ceph-mon[73572]: mgrmap e19: compute-0.ixicfj(active, since 8s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:46:03 compute-0 ceph-mon[73572]: 4.16 scrub starts
Oct 08 09:46:03 compute-0 ceph-mon[73572]: 5.1b deep-scrub starts
Oct 08 09:46:03 compute-0 ceph-mon[73572]: 4.16 scrub ok
Oct 08 09:46:03 compute-0 ceph-mon[73572]: 5.1b deep-scrub ok
Oct 08 09:46:03 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 08 09:46:03 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 08 09:46:03 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct 08 09:46:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:03 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct 08 09:46:03 compute-0 python3[91778]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916762.8493073-33846-183084686373290/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:46:03 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct 08 09:46:03 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct 08 09:46:03 compute-0 sudo[91826]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsyctnmwiuxenbbpjtotgzhlznpgchun ; /usr/bin/python3'
Oct 08 09:46:03 compute-0 sudo[91826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:03.918+0000 7f67ee4c5140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:46:03 compute-0 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:46:03 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct 08 09:46:03 compute-0 python3[91828]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:04 compute-0 podman[91829]: 2025-10-08 09:46:04.034960724 +0000 UTC m=+0.058028598 container create dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:46:04 compute-0 systemd[1]: Started libpod-conmon-dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca.scope.
Oct 08 09:46:04 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a33d61aba89fc689df1b2cd2819fd6083f25731ce33ae8ce6a26a47e7a3d4a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a33d61aba89fc689df1b2cd2819fd6083f25731ce33ae8ce6a26a47e7a3d4a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a33d61aba89fc689df1b2cd2819fd6083f25731ce33ae8ce6a26a47e7a3d4a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:04 compute-0 podman[91829]: 2025-10-08 09:46:04.01611295 +0000 UTC m=+0.039180804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:04 compute-0 podman[91829]: 2025-10-08 09:46:04.112328149 +0000 UTC m=+0.135396033 container init dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:46:04 compute-0 podman[91829]: 2025-10-08 09:46:04.118473237 +0000 UTC m=+0.141541071 container start dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:04 compute-0 podman[91829]: 2025-10-08 09:46:04.1215219 +0000 UTC m=+0.144589784 container attach dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 09:46:04 compute-0 ceph-mon[73572]: 2.10 scrub starts
Oct 08 09:46:04 compute-0 ceph-mon[73572]: 2.10 scrub ok
Oct 08 09:46:04 compute-0 ceph-mon[73572]: 5.f scrub starts
Oct 08 09:46:04 compute-0 ceph-mon[73572]: 5.f scrub ok
Oct 08 09:46:04 compute-0 ceph-mon[73572]: 5.17 scrub starts
Oct 08 09:46:04 compute-0 ceph-mon[73572]: 5.17 scrub ok
Oct 08 09:46:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.141+0000 7f67ee4c5140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct 08 09:46:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.213+0000 7f67ee4c5140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct 08 09:46:04 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct 08 09:46:04 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct 08 09:46:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.277+0000 7f67ee4c5140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct 08 09:46:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.352+0000 7f67ee4c5140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct 08 09:46:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.423+0000 7f67ee4c5140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct 08 09:46:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.755+0000 7f67ee4c5140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct 08 09:46:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.851+0000 7f67ee4c5140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:46:04 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct 08 09:46:05 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct 08 09:46:05 compute-0 ceph-mon[73572]: 3.8 scrub starts
Oct 08 09:46:05 compute-0 ceph-mon[73572]: 3.8 scrub ok
Oct 08 09:46:05 compute-0 ceph-mon[73572]: 6.d scrub starts
Oct 08 09:46:05 compute-0 ceph-mon[73572]: 6.d scrub ok
Oct 08 09:46:05 compute-0 ceph-mon[73572]: 6.14 scrub starts
Oct 08 09:46:05 compute-0 ceph-mon[73572]: 6.14 scrub ok
Oct 08 09:46:05 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts
Oct 08 09:46:05 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok
Oct 08 09:46:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:05.290+0000 7f67ee4c5140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:46:05 compute-0 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:46:05 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct 08 09:46:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:05.850+0000 7f67ee4c5140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:46:05 compute-0 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:46:05 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct 08 09:46:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:05.922+0000 7f67ee4c5140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:46:05 compute-0 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:46:05 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct 08 09:46:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.006+0000 7f67ee4c5140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct 08 09:46:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.156+0000 7f67ee4c5140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct 08 09:46:06 compute-0 ceph-mon[73572]: 3.11 scrub starts
Oct 08 09:46:06 compute-0 ceph-mon[73572]: 3.11 scrub ok
Oct 08 09:46:06 compute-0 ceph-mon[73572]: 5.1 scrub starts
Oct 08 09:46:06 compute-0 ceph-mon[73572]: 3.12 deep-scrub starts
Oct 08 09:46:06 compute-0 ceph-mon[73572]: 5.1 scrub ok
Oct 08 09:46:06 compute-0 ceph-mon[73572]: 3.12 deep-scrub ok
Oct 08 09:46:06 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 08 09:46:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.226+0000 7f67ee4c5140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct 08 09:46:06 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 08 09:46:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.394+0000 7f67ee4c5140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct 08 09:46:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.615+0000 7f67ee4c5140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct 08 09:46:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.882+0000 7f67ee4c5140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct 08 09:46:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.951+0000 7f67ee4c5140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:46:06 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct 08 09:46:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 08 09:46:06 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x55617f189860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  1: '-n'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  2: 'mgr.compute-0.ixicfj'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  3: '-f'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  4: '--setuser'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  5: 'ceph'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  6: '--setgroup'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  7: 'ceph'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  8: '--default-log-to-file=false'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  9: '--default-log-to-journald=true'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 08 09:46:06 compute-0 ceph-mgr[73869]: mgr respawn  exe_path /proc/self/exe
Oct 08 09:46:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 08 09:46:06 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 08 09:46:06 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.ixicfj(active, starting, since 0.0347759s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:46:07 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov restarted
Oct 08 09:46:07 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct 08 09:46:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct 08 09:46:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct 08 09:46:07 compute-0 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 08 09:46:07 compute-0 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct 08 09:46:07 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct 08 09:46:07 compute-0 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:46:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:07.170+0000 7f1a88fc9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:46:07 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct 08 09:46:07 compute-0 ceph-mon[73572]: 5.12 scrub starts
Oct 08 09:46:07 compute-0 ceph-mon[73572]: 5.12 scrub ok
Oct 08 09:46:07 compute-0 ceph-mon[73572]: 6.3 deep-scrub starts
Oct 08 09:46:07 compute-0 ceph-mon[73572]: 6.3 deep-scrub ok
Oct 08 09:46:07 compute-0 ceph-mon[73572]: 5.14 scrub starts
Oct 08 09:46:07 compute-0 ceph-mon[73572]: 5.14 scrub ok
Oct 08 09:46:07 compute-0 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct 08 09:46:07 compute-0 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct 08 09:46:07 compute-0 ceph-mon[73572]: osdmap e46: 3 total, 3 up, 3 in
Oct 08 09:46:07 compute-0 ceph-mon[73572]: mgrmap e20: compute-0.ixicfj(active, starting, since 0.0347759s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct 08 09:46:07 compute-0 ceph-mon[73572]: Standby manager daemon compute-1.swlvov restarted
Oct 08 09:46:07 compute-0 ceph-mon[73572]: Standby manager daemon compute-1.swlvov started
Oct 08 09:46:07 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct 08 09:46:07 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct 08 09:46:07 compute-0 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:46:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:07.279+0000 7f1a88fc9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:46:07 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct 08 09:46:07 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx restarted
Oct 08 09:46:07 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct 08 09:46:07 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.ixicfj(active, starting, since 1.04309s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:46:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:08.106+0000 7f1a88fc9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct 08 09:46:08 compute-0 ceph-mon[73572]: 5.4 scrub starts
Oct 08 09:46:08 compute-0 ceph-mon[73572]: 5.4 scrub ok
Oct 08 09:46:08 compute-0 ceph-mon[73572]: 4.e scrub starts
Oct 08 09:46:08 compute-0 ceph-mon[73572]: 4.e scrub ok
Oct 08 09:46:08 compute-0 ceph-mon[73572]: 6.16 scrub starts
Oct 08 09:46:08 compute-0 ceph-mon[73572]: 6.16 scrub ok
Oct 08 09:46:08 compute-0 ceph-mon[73572]: Standby manager daemon compute-2.mtagwx restarted
Oct 08 09:46:08 compute-0 ceph-mon[73572]: Standby manager daemon compute-2.mtagwx started
Oct 08 09:46:08 compute-0 ceph-mon[73572]: mgrmap e21: compute-0.ixicfj(active, starting, since 1.04309s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:08 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct 08 09:46:08 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct 08 09:46:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct 08 09:46:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:08.718+0000 7f1a88fc9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:46:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 08 09:46:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 08 09:46:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   from numpy import show_config as show_numpy_config
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct 08 09:46:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:08.877+0000 7f1a88fc9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:46:08 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct 08 09:46:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:08.949+0000 7f1a88fc9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:46:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct 08 09:46:09 compute-0 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:46:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct 08 09:46:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:09.085+0000 7f1a88fc9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:46:09 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Oct 08 09:46:09 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Oct 08 09:46:09 compute-0 ceph-mon[73572]: 3.15 scrub starts
Oct 08 09:46:09 compute-0 ceph-mon[73572]: 3.15 scrub ok
Oct 08 09:46:09 compute-0 ceph-mon[73572]: 6.11 scrub starts
Oct 08 09:46:09 compute-0 ceph-mon[73572]: 6.11 scrub ok
Oct 08 09:46:09 compute-0 ceph-mon[73572]: 3.1c scrub starts
Oct 08 09:46:09 compute-0 ceph-mon[73572]: 3.1c scrub ok
Oct 08 09:46:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct 08 09:46:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct 08 09:46:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct 08 09:46:09 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct 08 09:46:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.091+0000 7f1a88fc9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Oct 08 09:46:10 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Oct 08 09:46:10 compute-0 ceph-mon[73572]: 4.14 scrub starts
Oct 08 09:46:10 compute-0 ceph-mon[73572]: 4.14 scrub ok
Oct 08 09:46:10 compute-0 ceph-mon[73572]: 4.12 deep-scrub starts
Oct 08 09:46:10 compute-0 ceph-mon[73572]: 4.12 deep-scrub ok
Oct 08 09:46:10 compute-0 ceph-mon[73572]: 3.3 scrub starts
Oct 08 09:46:10 compute-0 ceph-mon[73572]: 3.3 scrub ok
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct 08 09:46:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.326+0000 7f1a88fc9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.397+0000 7f1a88fc9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.460+0000 7f1a88fc9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.539+0000 7f1a88fc9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.618+0000 7f1a88fc9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.970+0000 7f1a88fc9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:46:10 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct 08 09:46:11 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 08 09:46:11 compute-0 systemd[74898]: Activating special unit Exit the Session...
Oct 08 09:46:11 compute-0 systemd[74898]: Stopped target Main User Target.
Oct 08 09:46:11 compute-0 systemd[74898]: Stopped target Basic System.
Oct 08 09:46:11 compute-0 systemd[74898]: Stopped target Paths.
Oct 08 09:46:11 compute-0 systemd[74898]: Stopped target Sockets.
Oct 08 09:46:11 compute-0 systemd[74898]: Stopped target Timers.
Oct 08 09:46:11 compute-0 systemd[74898]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 08 09:46:11 compute-0 systemd[74898]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 08 09:46:11 compute-0 systemd[74898]: Closed D-Bus User Message Bus Socket.
Oct 08 09:46:11 compute-0 systemd[74898]: Stopped Create User's Volatile Files and Directories.
Oct 08 09:46:11 compute-0 systemd[74898]: Removed slice User Application Slice.
Oct 08 09:46:11 compute-0 systemd[74898]: Reached target Shutdown.
Oct 08 09:46:11 compute-0 systemd[74898]: Finished Exit the Session.
Oct 08 09:46:11 compute-0 systemd[74898]: Reached target Exit the Session.
Oct 08 09:46:11 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 08 09:46:11 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 08 09:46:11 compute-0 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:46:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct 08 09:46:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:11.083+0000 7f1a88fc9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:46:11 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 08 09:46:11 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 08 09:46:11 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 08 09:46:11 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 08 09:46:11 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 08 09:46:11 compute-0 systemd[1]: user-42477.slice: Consumed 32.471s CPU time.
Oct 08 09:46:11 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 08 09:46:11 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 08 09:46:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct 08 09:46:11 compute-0 ceph-mon[73572]: 4.15 scrub starts
Oct 08 09:46:11 compute-0 ceph-mon[73572]: 4.15 scrub ok
Oct 08 09:46:11 compute-0 ceph-mon[73572]: 6.10 scrub starts
Oct 08 09:46:11 compute-0 ceph-mon[73572]: 6.10 scrub ok
Oct 08 09:46:11 compute-0 ceph-mon[73572]: 3.5 scrub starts
Oct 08 09:46:11 compute-0 ceph-mon[73572]: 3.5 scrub ok
Oct 08 09:46:11 compute-0 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:46:11 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct 08 09:46:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:11.502+0000 7f1a88fc9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.056+0000 7f1a88fc9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct 08 09:46:12 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Oct 08 09:46:12 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.127+0000 7f1a88fc9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.206+0000 7f1a88fc9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct 08 09:46:12 compute-0 ceph-mon[73572]: 5.13 deep-scrub starts
Oct 08 09:46:12 compute-0 ceph-mon[73572]: 5.13 deep-scrub ok
Oct 08 09:46:12 compute-0 ceph-mon[73572]: 4.11 scrub starts
Oct 08 09:46:12 compute-0 ceph-mon[73572]: 4.11 scrub ok
Oct 08 09:46:12 compute-0 ceph-mon[73572]: 6.5 scrub starts
Oct 08 09:46:12 compute-0 ceph-mon[73572]: 6.5 scrub ok
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.356+0000 7f1a88fc9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct 08 09:46:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.424+0000 7f1a88fc9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.581+0000 7f1a88fc9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:46:12 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct 08 09:46:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.794+0000 7f1a88fc9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct 08 09:46:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:13.048+0000 7f1a88fc9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:46:13 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 08 09:46:13 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:46:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:13.122+0000 7f1a88fc9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
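Note: the long run of "Module X has missing NOTIFY_TYPES member" messages above comes from the ceph-mgr Python module loader, which expects each module class to declare the cluster notifications it consumes; modules that omit the member are still loaded, and the enabled ones are constructed successfully later in this log ("mgr load Constructed class from module: ..."), so these lines are load-time noise rather than failures. If confirmation is wanted, the module sources can be inspected inside the running mgr container; the container name below is taken from the unit name in these lines and the path is the stock mgr module directory, so treat this as an illustrative check only:
    # illustrative: list which bundled mgr modules declare NOTIFY_TYPES
    sudo podman exec ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj \
        grep -rl "NOTIFY_TYPES" /usr/share/ceph/mgr/ | sort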
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x562632431860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.ixicfj(active, starting, since 0.0305495s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e1 all = 1
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [balancer INFO root] Starting
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:46:13
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
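Note: the balancer starts in upmap mode with a 5% misplaced ceiling and skips its first optimization pass because, this early in mgr activation, all PG states are still unknown to the new daemon; it simply retries on its next tick. A sketch of the usual follow-up checks once the PG map has arrived (standard ceph CLI, not commands run in this log):
    ceph balancer status     # mode, active flag, last optimization plan
    ceph balancer mode upmap # mode already in use per the line above
    ceph balancer eval       # score the current PG distribution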
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: cephadm
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: dashboard
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO access_control] Loading user roles DB version=2
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO sso] Loading SSO DB version=1
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [progress INFO root] Loading...
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f1a0eb65eb0>, <progress.module.GhostEvent object at 0x7f1a0eb884f0>, <progress.module.GhostEvent object at 0x7f1a0eb88700>, <progress.module.GhostEvent object at 0x7f1a0eb884c0>, <progress.module.GhostEvent object at 0x7f1a0eb886d0>, <progress.module.GhostEvent object at 0x7f1a18431be0>, <progress.module.GhostEvent object at 0x7f1a133b7a00>, <progress.module.GhostEvent object at 0x7f1a0eb940d0>, <progress.module.GhostEvent object at 0x7f1a0eb94a60>, <progress.module.GhostEvent object at 0x7f1a0eb94a90>, <progress.module.GhostEvent object at 0x7f1a0eb94ac0>, <progress.module.GhostEvent object at 0x7f1a0eb94af0>] historic events
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
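Note: _maybe_adjust is pg_autoscaler's periodic pass that compares each pool's pg_num against the target derived from usage and pool flags (the "bulk" flag set on cephfs.cephfs.data later in this log feeds into that calculation). The standard way to see what it would change, shown here only as an illustrative follow-up:
    ceph osd pool autoscale-status
    ceph osd pool ls detail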
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
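Note: the restful module resolves its bind address and port 8003 but does not start serving because no TLS certificate has been configured for it, which is the expected state on a fresh cluster where only the dashboard is used. If the REST API is wanted, the module's documented bootstrap looks roughly like this (a sketch, not part of this run; "admin" is an arbitrary key name):
    ceph restful create-self-signed-cert
    ceph restful create-key admin   # prints an API key for that REST user
    ceph mgr services               # should now list a restful endpoint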
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov restarted
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:46:13 compute-0 ceph-mon[73572]: 5.0 scrub starts
Oct 08 09:46:13 compute-0 ceph-mon[73572]: 5.0 scrub ok
Oct 08 09:46:13 compute-0 ceph-mon[73572]: 6.13 scrub starts
Oct 08 09:46:13 compute-0 ceph-mon[73572]: 6.13 scrub ok
Oct 08 09:46:13 compute-0 ceph-mon[73572]: 4.5 scrub starts
Oct 08 09:46:13 compute-0 ceph-mon[73572]: 4.5 scrub ok
Oct 08 09:46:13 compute-0 ceph-mon[73572]: 5.8 scrub starts
Oct 08 09:46:13 compute-0 ceph-mon[73572]: 5.8 scrub ok
Oct 08 09:46:13 compute-0 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct 08 09:46:13 compute-0 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct 08 09:46:13 compute-0 ceph-mon[73572]: osdmap e47: 3 total, 3 up, 3 in
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mgrmap e22: compute-0.ixicfj(active, starting, since 0.0305495s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct 08 09:46:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] setup complete
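Note: rbd_support finishes setup after loading (currently empty) mirror-snapshot, trash-purge and task schedules for each RBD pool it can see: vms, volumes, backups and images. Schedules are populated through the rbd CLI rather than the mgr directly, and only take effect on pools or images with snapshot-based mirroring enabled; an illustrative example against the images pool named above, with an arbitrary interval:
    rbd mirror snapshot schedule add --pool images 3h
    rbd mirror snapshot schedule ls --pool images --recursive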
Oct 08 09:46:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
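Note: the run of "Initializing controller" lines is the dashboard module wiring each REST controller to its /api or /ui-api route before the CherryPy engine starts; the earlier "server: ssl=no host=192.168.122.100 port=8443" line means it will serve plain HTTP on that address. Two common follow-ups, sketched with placeholder user name and password file (neither appears in this log):
    ceph mgr services                                    # reports the dashboard URL once the engine is up
    echo 'S3cret!' > /tmp/dash_pw
    ceph dashboard ac-user-create admin -i /tmp/dash_pw administrator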
Oct 08 09:46:13 compute-0 sshd-session[92017]: Accepted publickey for ceph-admin from 192.168.122.100 port 43490 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:46:13 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 08 09:46:13 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 08 09:46:13 compute-0 systemd-logind[798]: New session 36 of user ceph-admin.
Oct 08 09:46:13 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 08 09:46:13 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 08 09:46:13 compute-0 systemd[92032]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:46:13 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.module] Engine started.
Oct 08 09:46:13 compute-0 systemd[92032]: Queued start job for default target Main User Target.
Oct 08 09:46:13 compute-0 systemd[92032]: Created slice User Application Slice.
Oct 08 09:46:13 compute-0 systemd[92032]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 08 09:46:13 compute-0 systemd[92032]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 09:46:13 compute-0 systemd[92032]: Reached target Paths.
Oct 08 09:46:13 compute-0 systemd[92032]: Reached target Timers.
Oct 08 09:46:13 compute-0 systemd[92032]: Starting D-Bus User Message Bus Socket...
Oct 08 09:46:13 compute-0 systemd[92032]: Starting Create User's Volatile Files and Directories...
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx restarted
Oct 08 09:46:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct 08 09:46:13 compute-0 systemd[92032]: Listening on D-Bus User Message Bus Socket.
Oct 08 09:46:13 compute-0 systemd[92032]: Reached target Sockets.
Oct 08 09:46:13 compute-0 systemd[92032]: Finished Create User's Volatile Files and Directories.
Oct 08 09:46:13 compute-0 systemd[92032]: Reached target Basic System.
Oct 08 09:46:13 compute-0 systemd[92032]: Reached target Main User Target.
Oct 08 09:46:13 compute-0 systemd[92032]: Startup finished in 120ms.
Oct 08 09:46:13 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 08 09:46:13 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Oct 08 09:46:13 compute-0 sshd-session[92017]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:46:13 compute-0 sudo[92049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:13 compute-0 sudo[92049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:13 compute-0 sudo[92049]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:14 compute-0 sudo[92074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
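Note: the sshd and sudo lines around here are the cephadm orchestrator reaching this host over SSH as ceph-admin and running the fsid-specific copy of the cephadm binary with "ls", which inventories every Ceph daemon deployed on the host as JSON (names, systemd units, container IDs, versions). Run by hand it is equivalent to roughly the following; the jq filter is only an illustration:
    sudo cephadm ls | jq -r '.[].name'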
Oct 08 09:46:14 compute-0 sudo[92074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:14 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 08 09:46:14 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.ixicfj(active, since 1.05266s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14469 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
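Note: here the volumes module receives "fs volume create" from client.admin with an explicit placement of all three compute hosts; it goes on to create the metadata and data pools, call "fs new", and hand an MDS service spec to cephadm, all visible in the following lines. The client-side command that produces this sequence has roughly this shape (a sketch, not copied from the log):
    ceph fs volume create cephfs --placement="compute-0 compute-1 compute-2"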
Oct 08 09:46:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 08 09:46:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 08 09:46:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 08 09:46:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 08 09:46:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0[73568]: 2025-10-08T09:46:14.190+0000 7f533f3cb640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
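Note: both health checks raised here (MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX) are the expected transient state of a filesystem created an instant before any MDS daemon exists; they clear on their own once cephadm deploys the mds.cephfs daemons requested below. Illustrative commands to watch that happen:
    ceph health detail
    ceph fs status cephfs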
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 08 09:46:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e2 new map
Oct 08 09:46:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-10-08T09:46:14:191872+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-08T09:46:14.191787+0000
                                           modified        2025-10-08T09:46:14.191787+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Oct 08 09:46:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 08 09:46:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
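Note: "Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2" stores a service specification under the config-key mgr/cephadm/spec.mds.cephfs, which the cephadm scheduler then reconciles into one MDS per listed host. The spec the volumes module generates is equivalent to roughly the following; the file name is hypothetical and the field values are inferred from the log line, so treat this as a sketch:
    cat > /tmp/mds-spec.yaml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF
    ceph orch apply -i /tmp/mds-spec.yaml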
Oct 08 09:46:14 compute-0 systemd[1]: libpod-dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca.scope: Deactivated successfully.
Oct 08 09:46:14 compute-0 podman[91829]: 2025-10-08 09:46:14.258552618 +0000 UTC m=+10.281620462 container died dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 08 09:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-97a33d61aba89fc689df1b2cd2819fd6083f25731ce33ae8ce6a26a47e7a3d4a-merged.mount: Deactivated successfully.
Oct 08 09:46:14 compute-0 podman[91829]: 2025-10-08 09:46:14.31675807 +0000 UTC m=+10.339825904 container remove dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:14 compute-0 systemd[1]: libpod-conmon-dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca.scope: Deactivated successfully.
Oct 08 09:46:14 compute-0 ceph-mon[73572]: 3.17 scrub starts
Oct 08 09:46:14 compute-0 ceph-mon[73572]: 3.17 scrub ok
Oct 08 09:46:14 compute-0 ceph-mon[73572]: 5.2 deep-scrub starts
Oct 08 09:46:14 compute-0 ceph-mon[73572]: 5.2 deep-scrub ok
Oct 08 09:46:14 compute-0 ceph-mon[73572]: Standby manager daemon compute-1.swlvov restarted
Oct 08 09:46:14 compute-0 ceph-mon[73572]: Standby manager daemon compute-1.swlvov started
Oct 08 09:46:14 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:46:14 compute-0 ceph-mon[73572]: 2.b scrub starts
Oct 08 09:46:14 compute-0 ceph-mon[73572]: 2.b scrub ok
Oct 08 09:46:14 compute-0 ceph-mon[73572]: Standby manager daemon compute-2.mtagwx restarted
Oct 08 09:46:14 compute-0 ceph-mon[73572]: Standby manager daemon compute-2.mtagwx started
Oct 08 09:46:14 compute-0 ceph-mon[73572]: mgrmap e23: compute-0.ixicfj(active, since 1.05266s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:14 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 08 09:46:14 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 08 09:46:14 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 08 09:46:14 compute-0 ceph-mon[73572]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 08 09:46:14 compute-0 ceph-mon[73572]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 08 09:46:14 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 08 09:46:14 compute-0 ceph-mon[73572]: osdmap e48: 3 total, 3 up, 3 in
Oct 08 09:46:14 compute-0 ceph-mon[73572]: fsmap cephfs:0
Oct 08 09:46:14 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:14 compute-0 sudo[91826]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:14 compute-0 sudo[92197]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuuhzkkheltemtkvntnipxmvazonrjts ; /usr/bin/python3'
Oct 08 09:46:14 compute-0 sudo[92197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Bus STARTING
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Bus STARTING
Oct 08 09:46:14 compute-0 podman[92207]: 2025-10-08 09:46:14.598463438 +0000 UTC m=+0.056868613 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 08 09:46:14 compute-0 python3[92206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
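Note: this Ansible task runs a one-shot ceph CLI container (quay.io/ceph/ceph:v19) with /tmp/ceph_mds.yml bind-mounted as /home/ceph_spec.yaml and applies it via "ceph orch apply --in-file", i.e. the same spec-driven path as the mgr-internal save above, only driven from the deployment tooling. Once applied, the scheduler's view of the service can be checked with the following sketch:
    ceph orch ls mds --export            # dump the stored mds.cephfs spec back as YAML
    ceph orch ps --daemon-type mds       # list the MDS daemons cephadm has placed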
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Client ('192.168.122.100', 46310) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Client ('192.168.122.100', 46310) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:46:14 compute-0 podman[92240]: 2025-10-08 09:46:14.70229179 +0000 UTC m=+0.045805256 container create c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 09:46:14 compute-0 podman[92207]: 2025-10-08 09:46:14.733128269 +0000 UTC m=+0.191533444 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 09:46:14 compute-0 systemd[1]: Started libpod-conmon-c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f.scope.
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Bus STARTED
Oct 08 09:46:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Bus STARTED
Oct 08 09:46:14 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdc32ff34607e7de23007d8104f7c4bc37dad9bd2483cbe6d3cdb37a01ac3b7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdc32ff34607e7de23007d8104f7c4bc37dad9bd2483cbe6d3cdb37a01ac3b7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdc32ff34607e7de23007d8104f7c4bc37dad9bd2483cbe6d3cdb37a01ac3b7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:14 compute-0 podman[92240]: 2025-10-08 09:46:14.68425131 +0000 UTC m=+0.027764826 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:14 compute-0 podman[92240]: 2025-10-08 09:46:14.785574555 +0000 UTC m=+0.129088031 container init c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:46:14 compute-0 podman[92240]: 2025-10-08 09:46:14.791792485 +0000 UTC m=+0.135305961 container start c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:46:14 compute-0 podman[92240]: 2025-10-08 09:46:14.79528396 +0000 UTC m=+0.138797476 container attach c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Oct 08 09:46:15 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 08 09:46:15 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 08 09:46:15 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:46:15 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:15 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 08 09:46:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 nice_ramanujan[92277]: Scheduled mds.cephfs update...
Oct 08 09:46:15 compute-0 systemd[1]: libpod-c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f.scope: Deactivated successfully.
Oct 08 09:46:15 compute-0 podman[92240]: 2025-10-08 09:46:15.167748302 +0000 UTC m=+0.511261768 container died c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fdc32ff34607e7de23007d8104f7c4bc37dad9bd2483cbe6d3cdb37a01ac3b7-merged.mount: Deactivated successfully.
Oct 08 09:46:15 compute-0 podman[92240]: 2025-10-08 09:46:15.207851573 +0000 UTC m=+0.551365039 container remove c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:15 compute-0 systemd[1]: libpod-conmon-c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f.scope: Deactivated successfully.
Oct 08 09:46:15 compute-0 sudo[92197]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:15 compute-0 podman[92412]: 2025-10-08 09:46:15.242064935 +0000 UTC m=+0.046283871 container exec 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:46:15 compute-0 podman[92412]: 2025-10-08 09:46:15.250278065 +0000 UTC m=+0.054496981 container exec_died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:46:15 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct 08 09:46:15 compute-0 sudo[92074]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:46:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:46:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 ceph-mon[73572]: 4.10 scrub starts
Oct 08 09:46:15 compute-0 ceph-mon[73572]: 4.10 scrub ok
Oct 08 09:46:15 compute-0 ceph-mon[73572]: 6.2 scrub starts
Oct 08 09:46:15 compute-0 ceph-mon[73572]: 6.2 scrub ok
Oct 08 09:46:15 compute-0 ceph-mon[73572]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:15 compute-0 ceph-mon[73572]: 3.e scrub starts
Oct 08 09:46:15 compute-0 ceph-mon[73572]: 3.e scrub ok
Oct 08 09:46:15 compute-0 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Bus STARTING
Oct 08 09:46:15 compute-0 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:46:15 compute-0 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Client ('192.168.122.100', 46310) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:46:15 compute-0 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:46:15 compute-0 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Bus STARTED
Oct 08 09:46:15 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:46:15 compute-0 sudo[92466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:15 compute-0 sudo[92511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdibqsaawmdyydzeawevgcdejxewbpjh ; /usr/bin/python3'
Oct 08 09:46:15 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.ixicfj(active, since 2s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:15 compute-0 sudo[92511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:15 compute-0 sudo[92466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:15 compute-0 sudo[92466]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:46:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:46:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:46:15 compute-0 sudo[92516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:46:15 compute-0 sudo[92516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:15 compute-0 python3[92514]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
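Note: this command asks the mgr nfs module for a ganesha cluster named "cephfs" fronted by an ingress (keepalived plus haproxy) service on virtual IP 192.168.122.2/24, with haproxy-protocol so client source addresses survive the proxy hop. A short sketch of how the result is usually inspected once the orchestrator has scheduled the daemons; the command names are standard ceph CLI, output is not reproduced here:

  ceph nfs cluster ls              # should list: cephfs
  ceph nfs cluster info cephfs     # virtual IP and backend ganesha daemons
  ceph orch ls nfs
  ceph orch ls ingress
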
Oct 08 09:46:15 compute-0 podman[92541]: 2025-10-08 09:46:15.587685329 +0000 UTC m=+0.061137683 container create 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:15 compute-0 systemd[1]: Started libpod-conmon-351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0.scope.
Oct 08 09:46:15 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702428090e6532e49ec22bf0dde632ad0302dd9e75332683e974d41e17512554/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702428090e6532e49ec22bf0dde632ad0302dd9e75332683e974d41e17512554/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702428090e6532e49ec22bf0dde632ad0302dd9e75332683e974d41e17512554/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:15 compute-0 podman[92541]: 2025-10-08 09:46:15.563441881 +0000 UTC m=+0.036894255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:15 compute-0 podman[92541]: 2025-10-08 09:46:15.690943324 +0000 UTC m=+0.164395678 container init 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 09:46:15 compute-0 podman[92541]: 2025-10-08 09:46:15.697655847 +0000 UTC m=+0.171108201 container start 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:15 compute-0 podman[92541]: 2025-10-08 09:46:15.70791926 +0000 UTC m=+0.181371644 container attach 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 09:46:15 compute-0 sudo[92516]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:15 compute-0 sudo[92610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:15 compute-0 sudo[92610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:15 compute-0 sudo[92610]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:16 compute-0 sudo[92635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 08 09:46:16 compute-0 sudo[92635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:16 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 08 09:46:16 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 08 09:46:16 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 08 09:46:16 compute-0 sudo[92635]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: 3.18 scrub starts
Oct 08 09:46:16 compute-0 ceph-mon[73572]: 3.18 scrub ok
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:16 compute-0 ceph-mon[73572]: pgmap v5: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:46:16 compute-0 ceph-mon[73572]: 5.7 scrub starts
Oct 08 09:46:16 compute-0 ceph-mon[73572]: 5.7 scrub ok
Oct 08 09:46:16 compute-0 ceph-mon[73572]: 4.1c scrub starts
Oct 08 09:46:16 compute-0 ceph-mon[73572]: 4.1c scrub ok
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mgrmap e24: compute-0.ixicfj(active, since 2s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:46:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:46:16 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:46:16 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:46:16 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:46:16 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:46:16 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:46:16 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
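Note: before the "Updating compute-N:/etc/ceph/ceph.conf" pushes above, the mon handles "config generate-minimal-conf" and "auth get client.admin"; those two outputs are what cephadm redistributes to every managed host. The same data can be pulled by hand with the corresponding CLI commands:

  ceph config generate-minimal-conf    # content pushed as /etc/ceph/ceph.conf
  ceph auth get client.admin           # content pushed as ceph.client.admin.keyring
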
Oct 08 09:46:16 compute-0 sudo[92681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 08 09:46:16 compute-0 sudo[92681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:16 compute-0 sudo[92681]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:16 compute-0 sudo[92706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph
Oct 08 09:46:16 compute-0 sudo[92706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:16 compute-0 sudo[92706]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:16 compute-0 sudo[92731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:46:16 compute-0 sudo[92731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:16 compute-0 sudo[92731]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:16 compute-0 sudo[92756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:46:16 compute-0 sudo[92756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:16 compute-0 sudo[92756]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:16 compute-0 sudo[92781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:46:16 compute-0 sudo[92781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:16 compute-0 sudo[92781]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 sudo[92829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:46:17 compute-0 sudo[92829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[92829]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Oct 08 09:46:17 compute-0 sudo[92854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:46:17 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Oct 08 09:46:17 compute-0 sudo[92854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[92854]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 sudo[92879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 08 09:46:17 compute-0 sudo[92879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[92879]: pam_unix(sudo:session): session closed for user root
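Note: the sudo sequence above is the file-distribution pattern used for /etc/ceph/ceph.conf: stage a .new file under /tmp/cephadm-<fsid>/, fix ownership and mode, then mv it over the destination as the last step. A condensed sketch of the same pattern with the fsid taken from the log and the file content omitted:

  fsid=787292cc-8154-50c4-9e00-e9be3e817149
  staging=/tmp/cephadm-$fsid/etc/ceph
  mkdir -p /etc/ceph "$staging"
  touch "$staging/ceph.conf.new"          # new config is written into the staging copy
  chown -R 0:0 "$staging/ceph.conf.new"
  chmod 644 "$staging/ceph.conf.new"
  mv "$staging/ceph.conf.new" /etc/ceph/ceph.conf   # swap in as the final step

The admin keyring pushed a moment later follows the same sequence but ends with chmod 600, since it carries the client.admin secret.
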
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v7: 198 pgs: 1 unknown, 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:46:17 compute-0 sudo[92904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:46:17 compute-0 sudo[92904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[92904]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 sudo[92929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:46:17 compute-0 sudo[92929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[92929]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:17 compute-0 sudo[92954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:46:17 compute-0 sudo[92954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[92954]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 sudo[92979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:46:17 compute-0 sudo[92979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[92979]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 ceph-mon[73572]: 5.1e scrub starts
Oct 08 09:46:17 compute-0 ceph-mon[73572]: 5.1e scrub ok
Oct 08 09:46:17 compute-0 ceph-mon[73572]: 3.a scrub starts
Oct 08 09:46:17 compute-0 ceph-mon[73572]: 3.a scrub ok
Oct 08 09:46:17 compute-0 ceph-mon[73572]: 3.1b scrub starts
Oct 08 09:46:17 compute-0 ceph-mon[73572]: 3.1b scrub ok
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 08 09:46:17 compute-0 ceph-mon[73572]: osdmap e49: 3 total, 3 up, 3 in
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:46:17 compute-0 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:46:17 compute-0 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:46:17 compute-0 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:46:17 compute-0 sudo[93004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:46:17 compute-0 sudo[93004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[93004]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 08 09:46:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 08 09:46:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 08 09:46:17 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:46:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:17 compute-0 sudo[93055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:46:17 compute-0 sudo[93055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 08 09:46:17 compute-0 sudo[93055]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:17 compute-0 systemd[1]: libpod-351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0.scope: Deactivated successfully.
Oct 08 09:46:17 compute-0 podman[92541]: 2025-10-08 09:46:17.55359755 +0000 UTC m=+2.027049914 container died 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:17 compute-0 sudo[93087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:46:17 compute-0 sudo[93087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[93087]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-702428090e6532e49ec22bf0dde632ad0302dd9e75332683e974d41e17512554-merged.mount: Deactivated successfully.
Oct 08 09:46:17 compute-0 podman[92541]: 2025-10-08 09:46:17.610639047 +0000 UTC m=+2.084091391 container remove 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:17 compute-0 systemd[1]: libpod-conmon-351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0.scope: Deactivated successfully.
Oct 08 09:46:17 compute-0 sudo[92511]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 sudo[93124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:17 compute-0 sudo[93124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[93124]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:17 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
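Note: this POOL_APP_NOT_ENABLED warning is raised in the short window around creation of the .nfs pool; the matching "osd pool application enable .nfs nfs" is dispatched and reported finished in the surrounding lines, after which the warning normally clears on its own. If a hand-created pool leaves the warning standing, the same command applies; pool and app names below are the ones from this log:

  ceph health detail                          # names the pool missing an application tag
  ceph osd pool application enable .nfs nfs
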
Oct 08 09:46:17 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.ixicfj(active, since 4s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:17 compute-0 sudo[93149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 08 09:46:17 compute-0 sudo[93149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[93149]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 sudo[93174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph
Oct 08 09:46:17 compute-0 sudo[93174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[93174]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 sudo[93199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:46:17 compute-0 sudo[93199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[93199]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:17 compute-0 sudo[93224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:46:17 compute-0 sudo[93224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[93224]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 sudo[93249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:46:17 compute-0 sudo[93249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:17 compute-0 sudo[93249]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:17 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:17 compute-0 sudo[93297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:46:18 compute-0 sudo[93297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93297]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 sudo[93345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:46:18 compute-0 sudo[93345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93345]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 sudo[93399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 sudo[93399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93399]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 08 09:46:18 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 08 09:46:18 compute-0 sudo[93453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdtvwgxrxgfvtsngecfichxnipbjhpna ; /usr/bin/python3'
Oct 08 09:46:18 compute-0 sudo[93453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:18 compute-0 sudo[93443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:46:18 compute-0 sudo[93443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93443]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 sudo[93475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:46:18 compute-0 sudo[93475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93475]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 python3[93470]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 09:46:18 compute-0 sudo[93453]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 sudo[93500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:46:18 compute-0 sudo[93500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93500]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:18 compute-0 sudo[93530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:46:18 compute-0 sudo[93530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93530]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 ceph-mon[73572]: 3.19 deep-scrub starts
Oct 08 09:46:18 compute-0 ceph-mon[73572]: 3.19 deep-scrub ok
Oct 08 09:46:18 compute-0 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:18 compute-0 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:18 compute-0 ceph-mon[73572]: pgmap v7: 198 pgs: 1 unknown, 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:46:18 compute-0 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:46:18 compute-0 ceph-mon[73572]: 4.d deep-scrub starts
Oct 08 09:46:18 compute-0 ceph-mon[73572]: 4.d deep-scrub ok
Oct 08 09:46:18 compute-0 ceph-mon[73572]: 5.1a scrub starts
Oct 08 09:46:18 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 08 09:46:18 compute-0 ceph-mon[73572]: osdmap e50: 3 total, 3 up, 3 in
Oct 08 09:46:18 compute-0 ceph-mon[73572]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:18 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:18 compute-0 ceph-mon[73572]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 08 09:46:18 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:18 compute-0 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 ceph-mon[73572]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 08 09:46:18 compute-0 ceph-mon[73572]: mgrmap e25: compute-0.ixicfj(active, since 4s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:18 compute-0 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 sudo[93576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:46:18 compute-0 sudo[93576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93576]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 08 09:46:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 08 09:46:18 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 08 09:46:18 compute-0 sudo[93651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aevzeogopksajoykhfgglejisadczqyb ; /usr/bin/python3'
Oct 08 09:46:18 compute-0 sudo[93651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:18 compute-0 sudo[93671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:46:18 compute-0 sudo[93671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93671]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 sudo[93696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:46:18 compute-0 sudo[93696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 sudo[93696]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 sudo[93721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:18 compute-0 sudo[93721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:18 compute-0 python3[93668]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916778.0132196-33877-171923265403929/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=fbda66f5b6d5a9cd8683861e87e5a427d546a56c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:46:18 compute-0 sudo[93721]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:46:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:46:18 compute-0 sudo[93651]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:46:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:46:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 sudo[93793]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imwdarpojghxdamwtgvzfgfjzdpcaanm ; /usr/bin/python3'
Oct 08 09:46:19 compute-0 sudo[93793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:19 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 08 09:46:19 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 08 09:46:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v10: 198 pgs: 1 unknown, 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:46:19 compute-0 python3[93795]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:19 compute-0 podman[93796]: 2025-10-08 09:46:19.233375788 +0000 UTC m=+0.035820772 container create a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:19 compute-0 systemd[1]: Started libpod-conmon-a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853.scope.
Oct 08 09:46:19 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad636ab329994b1c4c7907ad9cdc3a74f2596845ac49c969a93033d9e37914/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad636ab329994b1c4c7907ad9cdc3a74f2596845ac49c969a93033d9e37914/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:19 compute-0 podman[93796]: 2025-10-08 09:46:19.297265863 +0000 UTC m=+0.099710867 container init a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 09:46:19 compute-0 podman[93796]: 2025-10-08 09:46:19.303089801 +0000 UTC m=+0.105534795 container start a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 09:46:19 compute-0 podman[93796]: 2025-10-08 09:46:19.305978428 +0000 UTC m=+0.108423432 container attach a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 08 09:46:19 compute-0 podman[93796]: 2025-10-08 09:46:19.217822595 +0000 UTC m=+0.020267599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:46:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:46:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:46:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev fa47b4b8-0b6b-448e-9fbe-e0e5cc5c6311 (Updating node-exporter deployment (+1 -> 3))
Oct 08 09:46:19 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Oct 08 09:46:19 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Oct 08 09:46:19 compute-0 ceph-mon[73572]: 5.1a scrub ok
Oct 08 09:46:19 compute-0 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:19 compute-0 ceph-mon[73572]: 4.1e scrub starts
Oct 08 09:46:19 compute-0 ceph-mon[73572]: 4.1e scrub ok
Oct 08 09:46:19 compute-0 ceph-mon[73572]: 4.a scrub starts
Oct 08 09:46:19 compute-0 ceph-mon[73572]: 4.a scrub ok
Oct 08 09:46:19 compute-0 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:19 compute-0 ceph-mon[73572]: osdmap e51: 3 total, 3 up, 3 in
Oct 08 09:46:19 compute-0 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:46:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Oct 08 09:46:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2482379184' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 08 09:46:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2482379184' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 08 09:46:19 compute-0 systemd[1]: libpod-a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853.scope: Deactivated successfully.
Oct 08 09:46:19 compute-0 podman[93796]: 2025-10-08 09:46:19.737986733 +0000 UTC m=+0.540431717 container died a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 09:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-26ad636ab329994b1c4c7907ad9cdc3a74f2596845ac49c969a93033d9e37914-merged.mount: Deactivated successfully.
Oct 08 09:46:19 compute-0 podman[93796]: 2025-10-08 09:46:19.769897465 +0000 UTC m=+0.572342449 container remove a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:19 compute-0 systemd[1]: libpod-conmon-a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853.scope: Deactivated successfully.
Oct 08 09:46:19 compute-0 sudo[93793]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:19 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 08 09:46:20 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.ixicfj(active, since 6s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:20 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 08 09:46:20 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 08 09:46:20 compute-0 sudo[93871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnxavhmqbamnbryavnmavsneknstjydo ; /usr/bin/python3'
Oct 08 09:46:20 compute-0 ceph-mon[73572]: 4.9 scrub starts
Oct 08 09:46:20 compute-0 ceph-mon[73572]: 4.9 scrub ok
Oct 08 09:46:20 compute-0 ceph-mon[73572]: 2.1 scrub starts
Oct 08 09:46:20 compute-0 ceph-mon[73572]: 2.1 scrub ok
Oct 08 09:46:20 compute-0 ceph-mon[73572]: pgmap v10: 198 pgs: 1 unknown, 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:46:20 compute-0 ceph-mon[73572]: 3.d scrub starts
Oct 08 09:46:20 compute-0 ceph-mon[73572]: 3.d scrub ok
Oct 08 09:46:20 compute-0 ceph-mon[73572]: Deploying daemon node-exporter.compute-2 on compute-2
Oct 08 09:46:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2482379184' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 08 09:46:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2482379184' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 08 09:46:20 compute-0 ceph-mon[73572]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 08 09:46:20 compute-0 ceph-mon[73572]: mgrmap e26: compute-0.ixicfj(active, since 6s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:46:20 compute-0 sudo[93871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:20 compute-0 python3[93873]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:20 compute-0 podman[93875]: 2025-10-08 09:46:20.653204731 +0000 UTC m=+0.035637826 container create 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 09:46:20 compute-0 systemd[1]: Started libpod-conmon-61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22.scope.
Oct 08 09:46:20 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab3bf26f101868acfe57a61080a4308d5b933b4ebebc6f84ed90d7e50597080/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab3bf26f101868acfe57a61080a4308d5b933b4ebebc6f84ed90d7e50597080/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:20 compute-0 podman[93875]: 2025-10-08 09:46:20.73099296 +0000 UTC m=+0.113426065 container init 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:46:20 compute-0 podman[93875]: 2025-10-08 09:46:20.639162274 +0000 UTC m=+0.021595389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:20 compute-0 podman[93875]: 2025-10-08 09:46:20.735642922 +0000 UTC m=+0.118076017 container start 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 09:46:20 compute-0 podman[93875]: 2025-10-08 09:46:20.739014424 +0000 UTC m=+0.121447539 container attach 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 08 09:46:21 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3095387835' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 08 09:46:21 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Oct 08 09:46:21 compute-0 exciting_villani[93891]: 
Oct 08 09:46:21 compute-0 exciting_villani[93891]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":69,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1759916737,"num_in_osds":3,"osd_in_since":1759916717,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":196},{"state_name":"active+clean+scrubbing","count":1},{"state_name":"unknown","count":1}],"num_pgs":198,"num_pools":12,"num_objects":194,"data_bytes":464595,"bytes_used":88862720,"bytes_avail":64323063808,"bytes_total":64411926528,"unknown_pgs_ratio":0.0050505050458014011},"fsmap":{"epoch":2,"btime":"2025-10-08T09:46:14:191872+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-10-08T09:45:54.969307+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.ixicfj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.swlvov":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.mtagwx":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14382":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.959975+0000","gid":14382,"addr":"192.168.122.100:0/4157537618","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.wdkdxi","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}},"24146":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.963319+0000","gid":24146,"addr":"192.168.122.101:0/1900470648","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.aaugis","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}},"24148":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.967024+0000","gid":24148,"addr":"192.168.122.102:0/4200026288","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.pgshil","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{}}
Oct 08 09:46:21 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Oct 08 09:46:21 compute-0 systemd[1]: libpod-61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22.scope: Deactivated successfully.
Oct 08 09:46:21 compute-0 podman[93875]: 2025-10-08 09:46:21.167309575 +0000 UTC m=+0.549742670 container died 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:46:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 08 09:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ab3bf26f101868acfe57a61080a4308d5b933b4ebebc6f84ed90d7e50597080-merged.mount: Deactivated successfully.
Oct 08 09:46:21 compute-0 podman[93875]: 2025-10-08 09:46:21.202437935 +0000 UTC m=+0.584871030 container remove 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 09:46:21 compute-0 systemd[1]: libpod-conmon-61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22.scope: Deactivated successfully.
Oct 08 09:46:21 compute-0 sudo[93871]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:21 compute-0 sudo[93952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xabplfsfkauownklxxnvlxbryrgyrout ; /usr/bin/python3'
Oct 08 09:46:21 compute-0 sudo[93952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:21 compute-0 ceph-mon[73572]: 5.e scrub starts
Oct 08 09:46:21 compute-0 ceph-mon[73572]: 5.e scrub ok
Oct 08 09:46:21 compute-0 ceph-mon[73572]: 2.e scrub starts
Oct 08 09:46:21 compute-0 ceph-mon[73572]: 2.e scrub ok
Oct 08 09:46:21 compute-0 ceph-mon[73572]: 5.9 deep-scrub starts
Oct 08 09:46:21 compute-0 ceph-mon[73572]: 5.9 deep-scrub ok
Oct 08 09:46:21 compute-0 ceph-mon[73572]: 4.6 deep-scrub starts
Oct 08 09:46:21 compute-0 ceph-mon[73572]: 4.6 deep-scrub ok
Oct 08 09:46:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3095387835' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 08 09:46:21 compute-0 python3[93954]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:21 compute-0 podman[93955]: 2025-10-08 09:46:21.634071888 +0000 UTC m=+0.051738917 container create 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:21 compute-0 systemd[1]: Started libpod-conmon-6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125.scope.
Oct 08 09:46:21 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:21 compute-0 podman[93955]: 2025-10-08 09:46:21.606833158 +0000 UTC m=+0.024500287 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1840126ae369fb14f36086211c4a7bf670db675faf25e86afe68c987c6c5c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1840126ae369fb14f36086211c4a7bf670db675faf25e86afe68c987c6c5c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:21 compute-0 podman[93955]: 2025-10-08 09:46:21.710645899 +0000 UTC m=+0.128312928 container init 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:46:21 compute-0 podman[93955]: 2025-10-08 09:46:21.717486548 +0000 UTC m=+0.135153577 container start 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Oct 08 09:46:21 compute-0 podman[93955]: 2025-10-08 09:46:21.720448588 +0000 UTC m=+0.138115637 container attach 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 08 09:46:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 08 09:46:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/393958427' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 09:46:22 compute-0 nice_mclaren[93970]: 
Oct 08 09:46:22 compute-0 nice_mclaren[93970]: {"epoch":3,"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","modified":"2025-10-08T09:45:06.514939Z","created":"2025-10-08T09:42:59.307631Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Oct 08 09:46:22 compute-0 nice_mclaren[93970]: dumped monmap epoch 3
Oct 08 09:46:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:46:22 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 08 09:46:22 compute-0 systemd[1]: libpod-6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125.scope: Deactivated successfully.
Oct 08 09:46:22 compute-0 podman[93955]: 2025-10-08 09:46:22.166508441 +0000 UTC m=+0.584175510 container died 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:22 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 08 09:46:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:46:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 08 09:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb1840126ae369fb14f36086211c4a7bf670db675faf25e86afe68c987c6c5c6-merged.mount: Deactivated successfully.
Oct 08 09:46:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:22 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev fa47b4b8-0b6b-448e-9fbe-e0e5cc5c6311 (Updating node-exporter deployment (+1 -> 3))
Oct 08 09:46:22 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event fa47b4b8-0b6b-448e-9fbe-e0e5cc5c6311 (Updating node-exporter deployment (+1 -> 3)) in 3 seconds
Oct 08 09:46:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 08 09:46:22 compute-0 podman[93955]: 2025-10-08 09:46:22.210861691 +0000 UTC m=+0.628528720 container remove 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:22 compute-0 systemd[1]: libpod-conmon-6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125.scope: Deactivated successfully.
Oct 08 09:46:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:46:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:46:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:46:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:46:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:22 compute-0 sudo[93952]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:22 compute-0 sudo[94006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:22 compute-0 sudo[94006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:22 compute-0 sudo[94006]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:22 compute-0 sudo[94031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:46:22 compute-0 sudo[94031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:22 compute-0 ceph-mon[73572]: 2.19 deep-scrub starts
Oct 08 09:46:22 compute-0 ceph-mon[73572]: 2.19 deep-scrub ok
Oct 08 09:46:22 compute-0 ceph-mon[73572]: pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 08 09:46:22 compute-0 ceph-mon[73572]: 3.c scrub starts
Oct 08 09:46:22 compute-0 ceph-mon[73572]: 3.c scrub ok
Oct 08 09:46:22 compute-0 ceph-mon[73572]: 5.d scrub starts
Oct 08 09:46:22 compute-0 ceph-mon[73572]: 5.d scrub ok
Oct 08 09:46:22 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/393958427' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 09:46:22 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:22 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:22 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:22 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:22 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:46:22 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:46:22 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:22 compute-0 sudo[94125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzzjzxpntelsqjyfljzrvtqgprugqyva ; /usr/bin/python3'
Oct 08 09:46:22 compute-0 sudo[94125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:22 compute-0 podman[94102]: 2025-10-08 09:46:22.704189362 +0000 UTC m=+0.042517615 container create c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 09:46:22 compute-0 systemd[1]: Started libpod-conmon-c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a.scope.
Oct 08 09:46:22 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:22 compute-0 podman[94102]: 2025-10-08 09:46:22.771960556 +0000 UTC m=+0.110288839 container init c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 08 09:46:22 compute-0 podman[94102]: 2025-10-08 09:46:22.777241997 +0000 UTC m=+0.115570290 container start c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:22 compute-0 wonderful_tu[94137]: 167 167
Oct 08 09:46:22 compute-0 podman[94102]: 2025-10-08 09:46:22.780453585 +0000 UTC m=+0.118781858 container attach c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:46:22 compute-0 systemd[1]: libpod-c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a.scope: Deactivated successfully.
Oct 08 09:46:22 compute-0 podman[94102]: 2025-10-08 09:46:22.781120035 +0000 UTC m=+0.119448308 container died c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 08 09:46:22 compute-0 podman[94102]: 2025-10-08 09:46:22.688205466 +0000 UTC m=+0.026533749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a57387f6853435a975617d9204bc9067d3e2ccb2e60056c2b325ba9007614056-merged.mount: Deactivated successfully.
Oct 08 09:46:22 compute-0 python3[94132]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:22 compute-0 podman[94102]: 2025-10-08 09:46:22.815531833 +0000 UTC m=+0.153860086 container remove c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:22 compute-0 systemd[1]: libpod-conmon-c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a.scope: Deactivated successfully.
Oct 08 09:46:22 compute-0 podman[94152]: 2025-10-08 09:46:22.862318758 +0000 UTC m=+0.034797741 container create cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 09:46:22 compute-0 systemd[1]: Started libpod-conmon-cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3.scope.
Oct 08 09:46:22 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b7fce022a9552470c7f7f8c0e600debe054529c03bebf28dcd5cb7b83a2dab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b7fce022a9552470c7f7f8c0e600debe054529c03bebf28dcd5cb7b83a2dab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:22 compute-0 podman[94152]: 2025-10-08 09:46:22.912375492 +0000 UTC m=+0.084854485 container init cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:22 compute-0 podman[94152]: 2025-10-08 09:46:22.917309672 +0000 UTC m=+0.089788655 container start cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:46:22 compute-0 podman[94152]: 2025-10-08 09:46:22.920703825 +0000 UTC m=+0.093182838 container attach cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:46:22 compute-0 podman[94152]: 2025-10-08 09:46:22.846652481 +0000 UTC m=+0.019131484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:22 compute-0 podman[94177]: 2025-10-08 09:46:22.959052543 +0000 UTC m=+0.043414943 container create ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 09:46:23 compute-0 systemd[1]: Started libpod-conmon-ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5.scope.
Oct 08 09:46:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:23 compute-0 podman[94177]: 2025-10-08 09:46:22.939803137 +0000 UTC m=+0.024165617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:23 compute-0 podman[94177]: 2025-10-08 09:46:23.050526819 +0000 UTC m=+0.134889239 container init ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 09:46:23 compute-0 podman[94177]: 2025-10-08 09:46:23.059364518 +0000 UTC m=+0.143726928 container start ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:46:23 compute-0 podman[94177]: 2025-10-08 09:46:23.06337718 +0000 UTC m=+0.147739620 container attach ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 08 09:46:23 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 08 09:46:23 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 08 09:46:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Oct 08 09:46:23 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 13 completed events
Oct 08 09:46:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:46:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Oct 08 09:46:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2282328507' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 08 09:46:23 compute-0 cranky_elgamal[94170]: [client.openstack]
Oct 08 09:46:23 compute-0 cranky_elgamal[94170]:         key = AQADMuZoAAAAABAAatv7Ix+93M4zPKi4UUkwMw==
Oct 08 09:46:23 compute-0 cranky_elgamal[94170]:         caps mgr = "allow *"
Oct 08 09:46:23 compute-0 cranky_elgamal[94170]:         caps mon = "profile rbd"
Oct 08 09:46:23 compute-0 cranky_elgamal[94170]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 08 09:46:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:23 compute-0 systemd[1]: libpod-cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3.scope: Deactivated successfully.
Oct 08 09:46:23 compute-0 podman[94152]: 2025-10-08 09:46:23.349619206 +0000 UTC m=+0.522098239 container died cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-25b7fce022a9552470c7f7f8c0e600debe054529c03bebf28dcd5cb7b83a2dab-merged.mount: Deactivated successfully.
Oct 08 09:46:23 compute-0 clever_nash[94195]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:46:23 compute-0 clever_nash[94195]: --> All data devices are unavailable
Oct 08 09:46:23 compute-0 podman[94152]: 2025-10-08 09:46:23.398513394 +0000 UTC m=+0.570992387 container remove cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 09:46:23 compute-0 systemd[1]: libpod-conmon-cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3.scope: Deactivated successfully.
Oct 08 09:46:23 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 09:46:23 compute-0 systemd[1]: libpod-ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5.scope: Deactivated successfully.
Oct 08 09:46:23 compute-0 conmon[94195]: conmon ff95f0533b5987d756f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5.scope/container/memory.events
Oct 08 09:46:23 compute-0 podman[94177]: 2025-10-08 09:46:23.413369267 +0000 UTC m=+0.497731687 container died ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 09:46:23 compute-0 sudo[94125]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46-merged.mount: Deactivated successfully.
Oct 08 09:46:23 compute-0 podman[94177]: 2025-10-08 09:46:23.457105949 +0000 UTC m=+0.541468349 container remove ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 09:46:23 compute-0 ceph-mon[73572]: 7.1b scrub starts
Oct 08 09:46:23 compute-0 ceph-mon[73572]: 7.1b scrub ok
Oct 08 09:46:23 compute-0 ceph-mon[73572]: 3.10 scrub starts
Oct 08 09:46:23 compute-0 ceph-mon[73572]: 3.10 scrub ok
Oct 08 09:46:23 compute-0 ceph-mon[73572]: 5.b scrub starts
Oct 08 09:46:23 compute-0 ceph-mon[73572]: 5.b scrub ok
Oct 08 09:46:23 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:23 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2282328507' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 08 09:46:23 compute-0 systemd[1]: libpod-conmon-ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5.scope: Deactivated successfully.
Oct 08 09:46:23 compute-0 sudo[94031]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:23 compute-0 sudo[94254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:23 compute-0 sudo[94254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:23 compute-0 sudo[94254]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:23 compute-0 sudo[94279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:46:23 compute-0 sudo[94279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:23 compute-0 podman[94343]: 2025-10-08 09:46:23.972191783 +0000 UTC m=+0.051906802 container create eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:24 compute-0 systemd[1]: Started libpod-conmon-eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac.scope.
Oct 08 09:46:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:24 compute-0 podman[94343]: 2025-10-08 09:46:24.036323776 +0000 UTC m=+0.116038785 container init eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 08 09:46:24 compute-0 podman[94343]: 2025-10-08 09:46:24.041633057 +0000 UTC m=+0.121348046 container start eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 09:46:24 compute-0 podman[94343]: 2025-10-08 09:46:23.947313055 +0000 UTC m=+0.027028104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:24 compute-0 exciting_wu[94359]: 167 167
Oct 08 09:46:24 compute-0 podman[94343]: 2025-10-08 09:46:24.045227646 +0000 UTC m=+0.124942635 container attach eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:24 compute-0 systemd[1]: libpod-eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac.scope: Deactivated successfully.
Oct 08 09:46:24 compute-0 podman[94343]: 2025-10-08 09:46:24.04631954 +0000 UTC m=+0.126034539 container died eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e128e43ed808ced2832ce9d36899627b856d179936ab17b5d67c566bf295bbd-merged.mount: Deactivated successfully.
Oct 08 09:46:24 compute-0 podman[94343]: 2025-10-08 09:46:24.077482969 +0000 UTC m=+0.157197968 container remove eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 09:46:24 compute-0 systemd[1]: libpod-conmon-eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac.scope: Deactivated successfully.
Oct 08 09:46:24 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 08 09:46:24 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 08 09:46:24 compute-0 podman[94383]: 2025-10-08 09:46:24.222392941 +0000 UTC m=+0.043753903 container create 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 09:46:24 compute-0 systemd[1]: Started libpod-conmon-4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5.scope.
Oct 08 09:46:24 compute-0 podman[94383]: 2025-10-08 09:46:24.201066892 +0000 UTC m=+0.022427884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:24 compute-0 podman[94383]: 2025-10-08 09:46:24.32349683 +0000 UTC m=+0.144857782 container init 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 09:46:24 compute-0 podman[94383]: 2025-10-08 09:46:24.329360018 +0000 UTC m=+0.150720960 container start 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 08 09:46:24 compute-0 podman[94383]: 2025-10-08 09:46:24.33235498 +0000 UTC m=+0.153715962 container attach 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:46:24 compute-0 ceph-mon[73572]: 7.18 scrub starts
Oct 08 09:46:24 compute-0 ceph-mon[73572]: 7.18 scrub ok
Oct 08 09:46:24 compute-0 ceph-mon[73572]: pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Oct 08 09:46:24 compute-0 ceph-mon[73572]: 3.f scrub starts
Oct 08 09:46:24 compute-0 ceph-mon[73572]: 3.f scrub ok
Oct 08 09:46:24 compute-0 ceph-mon[73572]: 4.3 scrub starts
Oct 08 09:46:24 compute-0 ceph-mon[73572]: 4.3 scrub ok
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]: {
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:     "1": [
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:         {
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "devices": [
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "/dev/loop3"
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             ],
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "lv_name": "ceph_lv0",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "lv_size": "21470642176",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "name": "ceph_lv0",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "tags": {
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.cluster_name": "ceph",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.crush_device_class": "",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.encrypted": "0",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.osd_id": "1",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.type": "block",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.vdo": "0",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:                 "ceph.with_tpm": "0"
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             },
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "type": "block",
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:             "vg_name": "ceph_vg0"
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:         }
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]:     ]
Oct 08 09:46:24 compute-0 inspiring_babbage[94399]: }
Oct 08 09:46:24 compute-0 systemd[1]: libpod-4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5.scope: Deactivated successfully.
Oct 08 09:46:24 compute-0 podman[94383]: 2025-10-08 09:46:24.652175358 +0000 UTC m=+0.473536300 container died 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be-merged.mount: Deactivated successfully.
Oct 08 09:46:24 compute-0 podman[94383]: 2025-10-08 09:46:24.698172708 +0000 UTC m=+0.519533650 container remove 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:24 compute-0 systemd[1]: libpod-conmon-4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5.scope: Deactivated successfully.
Oct 08 09:46:24 compute-0 sudo[94279]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:24 compute-0 sudo[94495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:24 compute-0 sudo[94495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:24 compute-0 sudo[94495]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:24 compute-0 sudo[94543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:46:24 compute-0 sudo[94543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:24 compute-0 sudo[94617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvnqzjfxysnbsmnkwazfpjegcxvfcddc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759916784.5890815-33949-161336562295642/async_wrapper.py j189820904953 30 /home/zuul/.ansible/tmp/ansible-tmp-1759916784.5890815-33949-161336562295642/AnsiballZ_command.py _'
Oct 08 09:46:24 compute-0 sudo[94617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:25 compute-0 ansible-async_wrapper.py[94619]: Invoked with j189820904953 30 /home/zuul/.ansible/tmp/ansible-tmp-1759916784.5890815-33949-161336562295642/AnsiballZ_command.py _
Oct 08 09:46:25 compute-0 ansible-async_wrapper.py[94636]: Starting module and watcher
Oct 08 09:46:25 compute-0 ansible-async_wrapper.py[94636]: Start watching 94637 (30)
Oct 08 09:46:25 compute-0 ansible-async_wrapper.py[94637]: Start module (94637)
Oct 08 09:46:25 compute-0 ansible-async_wrapper.py[94619]: Return async_wrapper task started.
Oct 08 09:46:25 compute-0 sudo[94617]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct 08 09:46:25 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Oct 08 09:46:25 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Oct 08 09:46:25 compute-0 podman[94665]: 2025-10-08 09:46:25.199611467 +0000 UTC m=+0.033817721 container create 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 08 09:46:25 compute-0 python3[94638]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:25 compute-0 systemd[1]: Started libpod-conmon-491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9.scope.
Oct 08 09:46:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:25 compute-0 podman[94680]: 2025-10-08 09:46:25.263809582 +0000 UTC m=+0.038313128 container create da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:25 compute-0 podman[94665]: 2025-10-08 09:46:25.276845188 +0000 UTC m=+0.111051462 container init 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 08 09:46:25 compute-0 podman[94665]: 2025-10-08 09:46:25.185525548 +0000 UTC m=+0.019731812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:25 compute-0 podman[94665]: 2025-10-08 09:46:25.283063648 +0000 UTC m=+0.117269902 container start 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 08 09:46:25 compute-0 podman[94665]: 2025-10-08 09:46:25.286006098 +0000 UTC m=+0.120212352 container attach 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:25 compute-0 stupefied_dewdney[94688]: 167 167
Oct 08 09:46:25 compute-0 podman[94665]: 2025-10-08 09:46:25.287984188 +0000 UTC m=+0.122190442 container died 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:46:25 compute-0 systemd[1]: Started libpod-conmon-da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed.scope.
Oct 08 09:46:25 compute-0 systemd[1]: libpod-491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9.scope: Deactivated successfully.
Oct 08 09:46:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b08e02c731950535b5f167d68c9e4809a283983c7a24f02e671f07cce43f97c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cde19863ecff7ac6a73587e4a65b1cb602807ccd626fae4348e0dafb24c9848-merged.mount: Deactivated successfully.
Oct 08 09:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b08e02c731950535b5f167d68c9e4809a283983c7a24f02e671f07cce43f97c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:25 compute-0 podman[94665]: 2025-10-08 09:46:25.334188545 +0000 UTC m=+0.168394799 container remove 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 08 09:46:25 compute-0 systemd[1]: libpod-conmon-491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9.scope: Deactivated successfully.
Oct 08 09:46:25 compute-0 podman[94680]: 2025-10-08 09:46:25.247455754 +0000 UTC m=+0.021959330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:25 compute-0 podman[94680]: 2025-10-08 09:46:25.348022946 +0000 UTC m=+0.122526492 container init da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:25 compute-0 podman[94680]: 2025-10-08 09:46:25.352682318 +0000 UTC m=+0.127185864 container start da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 09:46:25 compute-0 podman[94680]: 2025-10-08 09:46:25.355583246 +0000 UTC m=+0.130086792 container attach da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:46:25 compute-0 podman[94729]: 2025-10-08 09:46:25.50907555 +0000 UTC m=+0.052677965 container create 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:25 compute-0 ceph-mon[73572]: 7.1e scrub starts
Oct 08 09:46:25 compute-0 ceph-mon[73572]: 7.1e scrub ok
Oct 08 09:46:25 compute-0 ceph-mon[73572]: 3.13 scrub starts
Oct 08 09:46:25 compute-0 ceph-mon[73572]: 3.13 scrub ok
Oct 08 09:46:25 compute-0 ceph-mon[73572]: 4.1d scrub starts
Oct 08 09:46:25 compute-0 ceph-mon[73572]: 4.1d scrub ok
Oct 08 09:46:25 compute-0 systemd[1]: Started libpod-conmon-1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703.scope.
Oct 08 09:46:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:25 compute-0 podman[94729]: 2025-10-08 09:46:25.48936194 +0000 UTC m=+0.032964395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:25 compute-0 podman[94729]: 2025-10-08 09:46:25.591879501 +0000 UTC m=+0.135481936 container init 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:25 compute-0 podman[94729]: 2025-10-08 09:46:25.598843773 +0000 UTC m=+0.142446198 container start 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:25 compute-0 podman[94729]: 2025-10-08 09:46:25.602052981 +0000 UTC m=+0.145655406 container attach 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:46:25 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:46:25 compute-0 happy_engelbart[94703]: 
Oct 08 09:46:25 compute-0 happy_engelbart[94703]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 08 09:46:25 compute-0 systemd[1]: libpod-da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed.scope: Deactivated successfully.
Oct 08 09:46:25 compute-0 podman[94680]: 2025-10-08 09:46:25.787361414 +0000 UTC m=+0.561864980 container died da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b08e02c731950535b5f167d68c9e4809a283983c7a24f02e671f07cce43f97c-merged.mount: Deactivated successfully.
Oct 08 09:46:25 compute-0 podman[94680]: 2025-10-08 09:46:25.827717893 +0000 UTC m=+0.602221439 container remove da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:46:25 compute-0 systemd[1]: libpod-conmon-da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed.scope: Deactivated successfully.
Oct 08 09:46:25 compute-0 ansible-async_wrapper.py[94637]: Module complete (94637)
Oct 08 09:46:26 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Oct 08 09:46:26 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Oct 08 09:46:26 compute-0 lvm[94895]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:46:26 compute-0 lvm[94895]: VG ceph_vg0 finished
Oct 08 09:46:26 compute-0 sudo[94896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulgpnyehctjfqziqavtavfulfwggiihh ; /usr/bin/python3'
Oct 08 09:46:26 compute-0 sudo[94896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:26 compute-0 wizardly_archimedes[94763]: {}
Oct 08 09:46:26 compute-0 systemd[1]: libpod-1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703.scope: Deactivated successfully.
Oct 08 09:46:26 compute-0 podman[94729]: 2025-10-08 09:46:26.340081914 +0000 UTC m=+0.883684339 container died 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:26 compute-0 systemd[1]: libpod-1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703.scope: Consumed 1.131s CPU time.
Oct 08 09:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b-merged.mount: Deactivated successfully.
Oct 08 09:46:26 compute-0 podman[94729]: 2025-10-08 09:46:26.380424562 +0000 UTC m=+0.924026987 container remove 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:26 compute-0 systemd[1]: libpod-conmon-1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703.scope: Deactivated successfully.
Oct 08 09:46:26 compute-0 sudo[94543]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:46:26 compute-0 python3[94899]: ansible-ansible.legacy.async_status Invoked with jid=j189820904953.94619 mode=status _async_dir=/root/.ansible_async
Oct 08 09:46:26 compute-0 sudo[94896]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:46:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:26 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev affb329f-dae8-4723-a1e4-2bc80680611b (Updating mds.cephfs deployment (+3 -> 3))
Oct 08 09:46:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 08 09:46:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 08 09:46:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 08 09:46:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:26 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.wfaozr on compute-2
Oct 08 09:46:26 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.wfaozr on compute-2
Oct 08 09:46:26 compute-0 ceph-mon[73572]: pgmap v13: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct 08 09:46:26 compute-0 ceph-mon[73572]: 7.6 deep-scrub starts
Oct 08 09:46:26 compute-0 ceph-mon[73572]: 7.6 deep-scrub ok
Oct 08 09:46:26 compute-0 ceph-mon[73572]: 5.16 scrub starts
Oct 08 09:46:26 compute-0 ceph-mon[73572]: 5.16 scrub ok
Oct 08 09:46:26 compute-0 ceph-mon[73572]: 3.9 scrub starts
Oct 08 09:46:26 compute-0 ceph-mon[73572]: 3.9 scrub ok
Oct 08 09:46:26 compute-0 ceph-mon[73572]: from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:46:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 08 09:46:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 08 09:46:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:26 compute-0 sudo[94959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cudjhipssmyozidlpgdobowsjkfltexd ; /usr/bin/python3'
Oct 08 09:46:26 compute-0 sudo[94959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:26 compute-0 python3[94961]: ansible-ansible.legacy.async_status Invoked with jid=j189820904953.94619 mode=cleanup _async_dir=/root/.ansible_async
Oct 08 09:46:26 compute-0 sudo[94959]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:27 compute-0 sudo[94985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdsyidwefxvlkipjophudrctpniyqhcl ; /usr/bin/python3'
Oct 08 09:46:27 compute-0 sudo[94985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 08 09:46:27 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 08 09:46:27 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 08 09:46:27 compute-0 python3[94987]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:27 compute-0 podman[94988]: 2025-10-08 09:46:27.350704757 +0000 UTC m=+0.046015362 container create b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:27 compute-0 systemd[1]: Started libpod-conmon-b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13.scope.
Oct 08 09:46:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a144b7ec43109ff5ba632ff48c0e7e024c04b31f59d975fe33011fe35455d318/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a144b7ec43109ff5ba632ff48c0e7e024c04b31f59d975fe33011fe35455d318/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:27 compute-0 podman[94988]: 2025-10-08 09:46:27.331717799 +0000 UTC m=+0.027028424 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:27 compute-0 podman[94988]: 2025-10-08 09:46:27.440284355 +0000 UTC m=+0.135594990 container init b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 08 09:46:27 compute-0 podman[94988]: 2025-10-08 09:46:27.446540975 +0000 UTC m=+0.141851590 container start b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:27 compute-0 podman[94988]: 2025-10-08 09:46:27.449323709 +0000 UTC m=+0.144634304 container attach b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:27 compute-0 ceph-mon[73572]: 7.2 deep-scrub starts
Oct 08 09:46:27 compute-0 ceph-mon[73572]: 7.2 deep-scrub ok
Oct 08 09:46:27 compute-0 ceph-mon[73572]: 5.11 scrub starts
Oct 08 09:46:27 compute-0 ceph-mon[73572]: 5.11 scrub ok
Oct 08 09:46:27 compute-0 ceph-mon[73572]: 3.1a scrub starts
Oct 08 09:46:27 compute-0 ceph-mon[73572]: Deploying daemon mds.cephfs.compute-2.wfaozr on compute-2
Oct 08 09:46:27 compute-0 ceph-mon[73572]: 3.1a scrub ok
Oct 08 09:46:27 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:46:27 compute-0 gifted_feynman[95004]: 
Oct 08 09:46:27 compute-0 gifted_feynman[95004]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 08 09:46:27 compute-0 systemd[1]: libpod-b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13.scope: Deactivated successfully.
Oct 08 09:46:27 compute-0 podman[94988]: 2025-10-08 09:46:27.819799841 +0000 UTC m=+0.515110456 container died b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a144b7ec43109ff5ba632ff48c0e7e024c04b31f59d975fe33011fe35455d318-merged.mount: Deactivated successfully.
Oct 08 09:46:27 compute-0 podman[94988]: 2025-10-08 09:46:27.864023137 +0000 UTC m=+0.559333752 container remove b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:27 compute-0 systemd[1]: libpod-conmon-b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13.scope: Deactivated successfully.
Oct 08 09:46:27 compute-0 sudo[94985]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:28 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 08 09:46:28 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 08 09:46:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:28 compute-0 sudo[95064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owyqjlgdnwtmcaxluqjfgqakjuybclul ; /usr/bin/python3'
Oct 08 09:46:28 compute-0 sudo[95064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:28 compute-0 ceph-mon[73572]: pgmap v14: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 08 09:46:28 compute-0 ceph-mon[73572]: 7.3 scrub starts
Oct 08 09:46:28 compute-0 ceph-mon[73572]: 7.3 scrub ok
Oct 08 09:46:28 compute-0 ceph-mon[73572]: 5.15 scrub starts
Oct 08 09:46:28 compute-0 ceph-mon[73572]: 5.15 scrub ok
Oct 08 09:46:28 compute-0 ceph-mon[73572]: 3.1d scrub starts
Oct 08 09:46:28 compute-0 ceph-mon[73572]: 3.1d scrub ok
Oct 08 09:46:28 compute-0 ceph-mon[73572]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:46:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:46:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:46:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 08 09:46:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 08 09:46:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 08 09:46:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 08 09:46:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:28 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:28 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.lphril on compute-0
Oct 08 09:46:28 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.lphril on compute-0
Oct 08 09:46:28 compute-0 python3[95066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:28 compute-0 sudo[95067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:28 compute-0 sudo[95067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:28 compute-0 sudo[95067]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:28 compute-0 podman[95090]: 2025-10-08 09:46:28.755268645 +0000 UTC m=+0.035476691 container create 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 09:46:28 compute-0 sudo[95098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:46:28 compute-0 sudo[95098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:28 compute-0 systemd[1]: Started libpod-conmon-2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d.scope.
Oct 08 09:46:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a63d0bf7d1e549f61de671eac8f6938109748a4ff77f2a4317118f32d79d9a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a63d0bf7d1e549f61de671eac8f6938109748a4ff77f2a4317118f32d79d9a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:28 compute-0 podman[95090]: 2025-10-08 09:46:28.818102629 +0000 UTC m=+0.098310715 container init 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:46:28 compute-0 podman[95090]: 2025-10-08 09:46:28.825402091 +0000 UTC m=+0.105610137 container start 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 09:46:28 compute-0 podman[95090]: 2025-10-08 09:46:28.82833929 +0000 UTC m=+0.108547366 container attach 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:28 compute-0 podman[95090]: 2025-10-08 09:46:28.742245779 +0000 UTC m=+0.022453845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:29 compute-0 podman[95196]: 2025-10-08 09:46:29.139728702 +0000 UTC m=+0.043124644 container create 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 08 09:46:29 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.f deep-scrub starts
Oct 08 09:46:29 compute-0 systemd[1]: Started libpod-conmon-0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d.scope.
Oct 08 09:46:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s
Oct 08 09:46:29 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.f deep-scrub ok
Oct 08 09:46:29 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:46:29 compute-0 tender_bassi[95132]: 
Oct 08 09:46:29 compute-0 tender_bassi[95132]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 08 09:46:29 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:29 compute-0 systemd[1]: libpod-2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d.scope: Deactivated successfully.
Oct 08 09:46:29 compute-0 podman[95090]: 2025-10-08 09:46:29.206044931 +0000 UTC m=+0.486252987 container died 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:46:29 compute-0 podman[95196]: 2025-10-08 09:46:29.215905841 +0000 UTC m=+0.119301793 container init 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:29 compute-0 podman[95196]: 2025-10-08 09:46:29.121503467 +0000 UTC m=+0.024899439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:29 compute-0 podman[95196]: 2025-10-08 09:46:29.221661096 +0000 UTC m=+0.125057038 container start 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 08 09:46:29 compute-0 naughty_payne[95212]: 167 167
Oct 08 09:46:29 compute-0 systemd[1]: libpod-0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d.scope: Deactivated successfully.
Oct 08 09:46:29 compute-0 podman[95196]: 2025-10-08 09:46:29.225485623 +0000 UTC m=+0.128881565 container attach 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 09:46:29 compute-0 podman[95196]: 2025-10-08 09:46:29.22768213 +0000 UTC m=+0.131078072 container died 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3660d7d06911837baf8b2e85783911cedd80aed8baa6ab0ceb3e8aee4cd70508-merged.mount: Deactivated successfully.
Oct 08 09:46:29 compute-0 podman[95196]: 2025-10-08 09:46:29.260770437 +0000 UTC m=+0.164166379 container remove 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:29 compute-0 systemd[1]: libpod-conmon-0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d.scope: Deactivated successfully.
Oct 08 09:46:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a63d0bf7d1e549f61de671eac8f6938109748a4ff77f2a4317118f32d79d9a9-merged.mount: Deactivated successfully.
Oct 08 09:46:29 compute-0 podman[95090]: 2025-10-08 09:46:29.295537086 +0000 UTC m=+0.575745132 container remove 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 08 09:46:29 compute-0 systemd[1]: Reloading.
Oct 08 09:46:29 compute-0 sudo[95064]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:29 compute-0 systemd-sysv-generator[95271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:46:29 compute-0 systemd-rc-local-generator[95265]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:46:29 compute-0 systemd[1]: libpod-conmon-2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d.scope: Deactivated successfully.
Oct 08 09:46:29 compute-0 systemd[1]: Reloading.
Oct 08 09:46:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e3 new map
Oct 08 09:46:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-10-08T09:46:29:578022+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-08T09:46:14.191787+0000
                                           modified        2025-10-08T09:46:14.191787+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.wfaozr{-1:24190} state up:standby seq 1 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]
Oct 08 09:46:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] up:boot
Oct 08 09:46:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] as mds.0
Oct 08 09:46:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.wfaozr assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 08 09:46:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 08 09:46:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 08 09:46:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 08 09:46:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 08 09:46:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"} v 0)
Oct 08 09:46:29 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"}]: dispatch
Oct 08 09:46:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e3 all = 0
Oct 08 09:46:29 compute-0 systemd-sysv-generator[95316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:46:29 compute-0 systemd-rc-local-generator[95312]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:46:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e4 new map
Oct 08 09:46:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-10-08T09:46:29:619207+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-08T09:46:14.191787+0000
                                           modified        2025-10-08T09:46:29.619201+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24190}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.wfaozr{0:24190} state up:creating seq 1 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Oct 08 09:46:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:creating}
Oct 08 09:46:29 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.lphril for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:46:29 compute-0 ceph-mon[73572]: 7.4 scrub starts
Oct 08 09:46:29 compute-0 ceph-mon[73572]: 7.4 scrub ok
Oct 08 09:46:29 compute-0 ceph-mon[73572]: 4.13 scrub starts
Oct 08 09:46:29 compute-0 ceph-mon[73572]: 4.13 scrub ok
Oct 08 09:46:29 compute-0 ceph-mon[73572]: 6.17 scrub starts
Oct 08 09:46:29 compute-0 ceph-mon[73572]: 6.17 scrub ok
Oct 08 09:46:29 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:29 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:29 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:29 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 08 09:46:29 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 08 09:46:29 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:29 compute-0 ceph-mon[73572]: Deploying daemon mds.cephfs.compute-0.lphril on compute-0
Oct 08 09:46:29 compute-0 ceph-mon[73572]: mds.? [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] up:boot
Oct 08 09:46:29 compute-0 ceph-mon[73572]: daemon mds.cephfs.compute-2.wfaozr assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 08 09:46:29 compute-0 ceph-mon[73572]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 08 09:46:29 compute-0 ceph-mon[73572]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 08 09:46:29 compute-0 ceph-mon[73572]: Cluster is now healthy
Oct 08 09:46:29 compute-0 ceph-mon[73572]: fsmap cephfs:0 1 up:standby
Oct 08 09:46:29 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"}]: dispatch
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.wfaozr is now active in filesystem cephfs as rank 0
Oct 08 09:46:30 compute-0 podman[95365]: 2025-10-08 09:46:30.047546574 +0000 UTC m=+0.035267904 container create bfe144fae903e2a681026d8a7a90cabe9d3350b5cab2de2a7f7ba7544ed11e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mds-cephfs-compute-0-lphril, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:30 compute-0 ansible-async_wrapper.py[94636]: Done in kid B.
Oct 08 09:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9a539e8e9276f5c905c2ad90382c7acdeecd98bc4075c5381c73c44fb51ba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9a539e8e9276f5c905c2ad90382c7acdeecd98bc4075c5381c73c44fb51ba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9a539e8e9276f5c905c2ad90382c7acdeecd98bc4075c5381c73c44fb51ba5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9a539e8e9276f5c905c2ad90382c7acdeecd98bc4075c5381c73c44fb51ba5/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.lphril supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:30 compute-0 podman[95365]: 2025-10-08 09:46:30.10879671 +0000 UTC m=+0.096518060 container init bfe144fae903e2a681026d8a7a90cabe9d3350b5cab2de2a7f7ba7544ed11e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mds-cephfs-compute-0-lphril, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:30 compute-0 podman[95365]: 2025-10-08 09:46:30.113000188 +0000 UTC m=+0.100721518 container start bfe144fae903e2a681026d8a7a90cabe9d3350b5cab2de2a7f7ba7544ed11e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mds-cephfs-compute-0-lphril, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:30 compute-0 bash[95365]: bfe144fae903e2a681026d8a7a90cabe9d3350b5cab2de2a7f7ba7544ed11e76
Oct 08 09:46:30 compute-0 podman[95365]: 2025-10-08 09:46:30.031599699 +0000 UTC m=+0.019321049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:30 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.lphril for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:46:30 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 08 09:46:30 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 08 09:46:30 compute-0 ceph-mds[95385]: set uid:gid to 167:167 (ceph:ceph)
Oct 08 09:46:30 compute-0 ceph-mds[95385]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct 08 09:46:30 compute-0 ceph-mds[95385]: main not setting numa affinity
Oct 08 09:46:30 compute-0 ceph-mds[95385]: pidfile_write: ignore empty --pid-file
Oct 08 09:46:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mds-cephfs-compute-0-lphril[95381]: starting mds.cephfs.compute-0.lphril at 
Oct 08 09:46:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Updating MDS map to version 4 from mon.2
Oct 08 09:46:30 compute-0 sudo[95098]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 08 09:46:30 compute-0 sudo[95427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jywxkzwzsxevlzkinkdoiyyyrmipikqn ; /usr/bin/python3'
Oct 08 09:46:30 compute-0 sudo[95427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:30 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.bumazt on compute-1
Oct 08 09:46:30 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.bumazt on compute-1
Oct 08 09:46:30 compute-0 python3[95429]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:30 compute-0 podman[95430]: 2025-10-08 09:46:30.409687332 +0000 UTC m=+0.037136613 container create ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 09:46:30 compute-0 systemd[1]: Started libpod-conmon-ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16.scope.
Oct 08 09:46:30 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757cb89e17999b9f3eafc01a9705ed849b0c111a2248395d8df7004baebb89c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757cb89e17999b9f3eafc01a9705ed849b0c111a2248395d8df7004baebb89c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:30 compute-0 podman[95430]: 2025-10-08 09:46:30.482507499 +0000 UTC m=+0.109956810 container init ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 09:46:30 compute-0 podman[95430]: 2025-10-08 09:46:30.489354488 +0000 UTC m=+0.116803769 container start ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 09:46:30 compute-0 podman[95430]: 2025-10-08 09:46:30.394102097 +0000 UTC m=+0.021551388 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:30 compute-0 podman[95430]: 2025-10-08 09:46:30.493000168 +0000 UTC m=+0.120449459 container attach ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:46:30 compute-0 youthful_lederberg[95445]: 
Oct 08 09:46:30 compute-0 youthful_lederberg[95445]: [{"container_id": "f2b90c859a73", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.13%", "created": "2025-10-08T09:43:42.269755Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-08T09:46:15.296115Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2025-10-08T09:43:42.156989Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@crash.compute-0", "version": "19.2.3"}, {"container_id": "53f09fa290e6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.38%", "created": "2025-10-08T09:44:18.547121Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-08T09:46:15.405671Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2025-10-08T09:44:18.459704Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@crash.compute-1", "version": "19.2.3"}, {"container_id": "0965ec386585", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.30%", "created": "2025-10-08T09:45:15.118362Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-08T09:46:15.348367Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2025-10-08T09:45:15.003335Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-0.lphril", "daemon_name": "mds.cephfs.compute-0.lphril", "daemon_type": "mds", "events": ["2025-10-08T09:46:30.188417Z daemon:mds.cephfs.compute-0.lphril [INFO] \"Deployed mds.cephfs.compute-0.lphril on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-2.wfaozr", "daemon_name": "mds.cephfs.compute-2.wfaozr", "daemon_type": "mds", "events": ["2025-10-08T09:46:28.648019Z daemon:mds.cephfs.compute-2.wfaozr [INFO] \"Deployed mds.cephfs.compute-2.wfaozr on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "507427ceb179", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "27.63%", "created": "2025-10-08T09:43:06.346964Z", "daemon_id": "compute-0.ixicfj", "daemon_name": "mgr.compute-0.ixicfj", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-08T09:46:15.296017Z", "memory_usage": 541484646, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-08T09:43:04.895223Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mgr.compute-0.ixicfj", "version": "19.2.3"}, {"container_id": "0003a3387a2b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "41.13%", "created": "2025-10-08T09:45:13.213552Z", "daemon_id": "compute-1.swlvov", "daemon_name": "mgr.compute-1.swlvov", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-08T09:46:15.405937Z", "memory_usage": 504260198, "ports": [8765], "service_name": "mgr", "started": "2025-10-08T09:45:13.123094Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mgr.compute-1.swlvov", "version": "19.2.3"}, {"container_id": "e85811784b26", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "39.33%", "created": "2025-10-08T09:45:07.513166Z", "daemon_id": "compute-2.mtagwx", "daemon_name": "mgr.compute-2.mtagwx", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-08T09:46:15.348193Z", "memory_usage": 504469913, "ports": [8765], "service_name": "mgr", "started": "2025-10-08T09:45:07.403917Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mgr.compute-2.mtagwx", "version": "19.2.3"}, {"container_id": "01c666addd85", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.65%", "created": "2025-10-08T09:43:01.297917Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-08T09:46:15.295901Z", "memory_request": 2147483648, "memory_usage": 60597207, "ports": [], "service_name": "mon", "started": "2025-10-08T09:43:03.162430Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mon.compute-0", "version": "19.2.3"}, {"container_id": "1b83aab6dc82", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.10%", "created": "2025-10-08T09:45:02.392269Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-08T09:46:15.405865Z", "memory_request": 2147483648, "memory_usage": 49744445, "ports": [], "service_name": "mon", "started": "2025-10-08T09:45:02.298589Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mon.compute-1", "version": "19.2.3"}, {"container_id": "0af6b66ef837", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.94%", "created": "2025-10-08T09:45:00.535986Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-08T09:46:15.348104Z", "memory_request": 2147483648, "memory_usage": 47930408, "ports": [], "service_name": "mon", "started": "2025-10-08T09:45:00.428171Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mon.compute-2", "version": "19.2.3"}, {"container_id": "0dbea514cc83", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80", "quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.11%", "created": "2025-10-08T09:45:49.287328Z", "daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-08T09:46:15.296323Z", "memory_usage": 4165992, "ports": [9100], "service_name": "node-exporter", "started": "2025-10-08T09:45:49.204180Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@node-exporter.compute-0", "version": "1.7.0"}, {"container_id": "15effb74d2a3", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e", "quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.13%", "created": "2025-10-08T09:46:02.274832Z", "daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-08T09:46:15.406152Z", "memory_usage": 5905580, "ports": [9100], "service_name": "node-exporter", "started": "2025-10-08T09:46:02.197375Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-787292cc-8154-50c4-9e00-e9be3e817149@node-exporter.compute-1", "version": "1.7.0"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2025-10-08T09:46:22.193532Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "7ace3f50e48c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.27%", "created": "2025-10-08T09:44:29.767231Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-08T09:46:15.296185Z", "memory_request": 4294967296, "memory_usage": 78873886, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-08T09:44:29.663343Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@osd.1", "version": "19.2.3"}, {"container_id": "24b716d4ce22", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.76%", "created": "2025-10-08T09:44:31.075450Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-08T09:46:15.405791Z", "memory_request": 4294967296, "memory_usage": 73085747, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-08T09:44:30.950703Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@osd.0", "version": "19.2.3"}, {"container_id": "2e239c7c595d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.96%", "created": "2025-10-08T09:45:29.177725Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-08T09:46:15.348436Z", "memory_request": 4294967296, "memory_usage": 66280488, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-08T09:45:29.070412Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@osd.2", "version": "19.2.3"}, {"container_id": "c6c7ccd8691d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": 
"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.73%", "created": "2025-10-08T09:45:46.886255Z", "daemon_id": "rgw.compute-0.wdkdxi", "daemon_name": "rgw.rgw.compute-0.wdkdxi", "daemon_type": "rgw", "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-10-08T09:46:15.296255Z", "memory_usage": 101984501, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-10-08T09:45:46.804390Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@rgw.rgw.compute-0.wdkdxi", "version": "19.2.3"}, {"container_id": "1801c83f2267", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.86%", "created": "2025-10-08T09:45:45.194386Z", "daemon_id": "rgw.compute-1.aaugis", "daemon_name": "rgw.rgw.compute-1.aaugis", "daemon_type": "rgw", "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "last_refresh": "2025-10-08T09:46:15.406009Z", "memory_usage": 100778639, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-10-08T09:45:45.089673Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@rgw.rgw.compute-1.aaugis", "version": "19.2.3"}, {"container_id": "5733e6b82c90", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.25%", "created": "2025-10-08T09:45:43.424093Z", "daemon_id": "rgw.compute-2.pgshil", "daemon_name": "rgw.rgw.compute-2.pgshil", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2025-10-08T09:46:15.348503Z", "memory_usage": 104773713, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-10-08T09:45:43.311790Z", "status": 1, "statu
Oct 08 09:46:30 compute-0 youthful_lederberg[95445]: s_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@rgw.rgw.compute-2.pgshil", "version": "19.2.3"}]
Oct 08 09:46:30 compute-0 systemd[1]: libpod-ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16.scope: Deactivated successfully.
Oct 08 09:46:30 compute-0 podman[95430]: 2025-10-08 09:46:30.854263028 +0000 UTC m=+0.481712309 container died ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6757cb89e17999b9f3eafc01a9705ed849b0c111a2248395d8df7004baebb89c-merged.mount: Deactivated successfully.
Oct 08 09:46:30 compute-0 podman[95430]: 2025-10-08 09:46:30.894615448 +0000 UTC m=+0.522064729 container remove ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:30 compute-0 systemd[1]: libpod-conmon-ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16.scope: Deactivated successfully.
Oct 08 09:46:30 compute-0 ceph-mon[73572]: 7.f deep-scrub starts
Oct 08 09:46:30 compute-0 ceph-mon[73572]: pgmap v15: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s
Oct 08 09:46:30 compute-0 ceph-mon[73572]: 7.f deep-scrub ok
Oct 08 09:46:30 compute-0 ceph-mon[73572]: from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:46:30 compute-0 ceph-mon[73572]: 3.14 deep-scrub starts
Oct 08 09:46:30 compute-0 ceph-mon[73572]: 3.14 deep-scrub ok
Oct 08 09:46:30 compute-0 ceph-mon[73572]: 6.1c scrub starts
Oct 08 09:46:30 compute-0 ceph-mon[73572]: 6.1c scrub ok
Oct 08 09:46:30 compute-0 ceph-mon[73572]: fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:creating}
Oct 08 09:46:30 compute-0 ceph-mon[73572]: daemon mds.cephfs.compute-2.wfaozr is now active in filesystem cephfs as rank 0
Oct 08 09:46:30 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:30 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:30 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:30 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 08 09:46:30 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 08 09:46:30 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:30 compute-0 sudo[95427]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e5 new map
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-10-08T09:46:30:899414+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-08T09:46:14.191787+0000
                                           modified        2025-10-08T09:46:30.899412+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24190}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24190 members: 24190
                                           [mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 2 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat {c=[1],r=[1],i=[1fff]}]
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] up:active
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] up:boot
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 1 up:standby
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"} v 0)
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"}]: dispatch
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e5 all = 0
Oct 08 09:46:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Updating MDS map to version 5 from mon.2
Oct 08 09:46:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Monitors have assigned me to become a standby
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e6 new map
Oct 08 09:46:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-10-08T09:46:30:924934+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-08T09:46:14.191787+0000
                                           modified        2025-10-08T09:46:30.899412+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24190}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24190 members: 24190
                                           [mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 2 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat {c=[1],r=[1],i=[1fff]}]
Oct 08 09:46:30 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 1 up:standby
Oct 08 09:46:31 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 08 09:46:31 compute-0 rsyslogd[1005]: message too long (16383) with configured size 8096, begin of message is: [{"container_id": "f2b90c859a73", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 08 09:46:31 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 08 09:46:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:46:31 compute-0 sudo[95505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btjmnqbqesywuxctdwdrmzznnhkdvvsv ; /usr/bin/python3'
Oct 08 09:46:31 compute-0 sudo[95505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev affb329f-dae8-4723-a1e4-2bc80680611b (Updating mds.cephfs deployment (+3 -> 3))
Oct 08 09:46:31 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event affb329f-dae8-4723-a1e4-2bc80680611b (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 14600416-a126-4524-a7b9-d20314f3302e (Updating nfs.cephfs deployment (+3 -> 3))
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn
Oct 08 09:46:31 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 08 09:46:31 compute-0 ceph-mgr[73869]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 08 09:46:31 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:31 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:31 compute-0 ceph-mon[73572]: 7.8 scrub starts
Oct 08 09:46:31 compute-0 ceph-mon[73572]: 7.8 scrub ok
Oct 08 09:46:31 compute-0 ceph-mon[73572]: Deploying daemon mds.cephfs.compute-1.bumazt on compute-1
Oct 08 09:46:31 compute-0 ceph-mon[73572]: 5.1f scrub starts
Oct 08 09:46:31 compute-0 ceph-mon[73572]: 5.1f scrub ok
Oct 08 09:46:31 compute-0 ceph-mon[73572]: 7.1f deep-scrub starts
Oct 08 09:46:31 compute-0 ceph-mon[73572]: 7.1f deep-scrub ok
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mds.? [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] up:active
Oct 08 09:46:31 compute-0 ceph-mon[73572]: mds.? [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] up:boot
Oct 08 09:46:31 compute-0 ceph-mon[73572]: fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 1 up:standby
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"}]: dispatch
Oct 08 09:46:31 compute-0 ceph-mon[73572]: fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 1 up:standby
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 08 09:46:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 08 09:46:31 compute-0 python3[95507]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct 08 09:46:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 08 09:46:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 08 09:46:32 compute-0 podman[95509]: 2025-10-08 09:46:32.036375753 +0000 UTC m=+0.042061622 container create f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:32 compute-0 systemd[1]: Started libpod-conmon-f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5.scope.
Oct 08 09:46:32 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/405028fddd60dd945498a838f0b1de70782f90b15ec83e52a16a3af8d2700a59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/405028fddd60dd945498a838f0b1de70782f90b15ec83e52a16a3af8d2700a59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:32 compute-0 podman[95509]: 2025-10-08 09:46:32.02017245 +0000 UTC m=+0.025858269 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:32 compute-0 podman[95509]: 2025-10-08 09:46:32.131129898 +0000 UTC m=+0.136815747 container init f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:32 compute-0 podman[95509]: 2025-10-08 09:46:32.139241575 +0000 UTC m=+0.144927374 container start f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:32 compute-0 podman[95509]: 2025-10-08 09:46:32.142595877 +0000 UTC m=+0.148281726 container attach f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 08 09:46:32 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct 08 09:46:32 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct 08 09:46:32 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw
Oct 08 09:46:32 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw
Oct 08 09:46:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 08 09:46:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:46:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:46:32 compute-0 ceph-mgr[73869]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.lgtqnn's ganesha conf is defaulting to empty
Oct 08 09:46:32 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.lgtqnn's ganesha conf is defaulting to empty
Oct 08 09:46:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:32 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 08 09:46:32 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1
Oct 08 09:46:32 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1
Oct 08 09:46:32 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 08 09:46:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 08 09:46:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732719460' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 08 09:46:32 compute-0 angry_cray[95540]: 
Oct 08 09:46:32 compute-0 angry_cray[95540]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":81,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1759916737,"num_in_osds":3,"osd_in_since":1759916717,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":88997888,"bytes_avail":64322928640,"bytes_total":64411926528,"read_bytes_sec":15014,"write_bytes_sec":0,"read_op_per_sec":4,"write_op_per_sec":1},"fsmap":{"epoch":6,"btime":"2025-10-08T09:46:30:924934+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.wfaozr","status":"up:active","gid":24190}],"up:standby":1},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-10-08T09:45:54.969307+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.ixicfj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.swlvov":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.mtagwx":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14382":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.959975+0000","gid":14382,"addr":"192.168.122.100:0/4157537618","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.wdkdxi","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 
2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}},"24146":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.963319+0000","gid":24146,"addr":"192.168.122.101:0/1900470648","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.aaugis","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}},"24148":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.967024+0000","gid":24148,"addr":"192.168.122.102:0/4200026288","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.pgshil","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"affb329f-dae8-4723-a1e4-2bc80680611b":{"message":"Updating mds.cephfs deployment (+3 -> 3) (3s)\n      [==================..........] (remaining: 1s)","progress":0.66666668653488159,"add_to_ceph_s":true}}}
Oct 08 09:46:32 compute-0 systemd[1]: libpod-f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5.scope: Deactivated successfully.
Oct 08 09:46:32 compute-0 podman[95584]: 2025-10-08 09:46:32.606707549 +0000 UTC m=+0.021291939 container died f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-405028fddd60dd945498a838f0b1de70782f90b15ec83e52a16a3af8d2700a59-merged.mount: Deactivated successfully.
Oct 08 09:46:32 compute-0 podman[95584]: 2025-10-08 09:46:32.638537639 +0000 UTC m=+0.053122009 container remove f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 09:46:32 compute-0 systemd[1]: libpod-conmon-f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5.scope: Deactivated successfully.
Oct 08 09:46:32 compute-0 sudo[95505]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e7 new map
Oct 08 09:46:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-10-08T09:46:32:835229+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-08T09:46:14.191787+0000
                                           modified        2025-10-08T09:46:30.899412+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24190}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24190 members: 24190
                                           [mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 2 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.bumazt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] compat {c=[1],r=[1],i=[1fff]}]
Oct 08 09:46:32 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] up:boot
Oct 08 09:46:32 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 2 up:standby
Oct 08 09:46:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"} v 0)
Oct 08 09:46:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"}]: dispatch
Oct 08 09:46:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e7 all = 0
Oct 08 09:46:32 compute-0 ceph-mon[73572]: 7.9 scrub starts
Oct 08 09:46:32 compute-0 ceph-mon[73572]: 7.9 scrub ok
Oct 08 09:46:32 compute-0 ceph-mon[73572]: pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Oct 08 09:46:32 compute-0 ceph-mon[73572]: 3.16 scrub starts
Oct 08 09:46:32 compute-0 ceph-mon[73572]: 3.16 scrub ok
Oct 08 09:46:32 compute-0 ceph-mon[73572]: 6.1e deep-scrub starts
Oct 08 09:46:32 compute-0 ceph-mon[73572]: 6.1e deep-scrub ok
Oct 08 09:46:32 compute-0 ceph-mon[73572]: Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn
Oct 08 09:46:32 compute-0 ceph-mon[73572]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 08 09:46:32 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 08 09:46:32 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:32 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 08 09:46:32 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 08 09:46:32 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:46:32 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:46:32 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2732719460' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 08 09:46:32 compute-0 ceph-mon[73572]: mds.? [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] up:boot
Oct 08 09:46:32 compute-0 ceph-mon[73572]: fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 2 up:standby
Oct 08 09:46:32 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"}]: dispatch
Oct 08 09:46:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:46:33 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 08 09:46:33 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 08 09:46:33 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 14 completed events
Oct 08 09:46:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:46:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:33 compute-0 sudo[95622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eehoyqecrmxmppxawpzathvhocdnmlym ; /usr/bin/python3'
Oct 08 09:46:33 compute-0 sudo[95622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:46:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:46:33 compute-0 python3[95624]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:46:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:33 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ettfma
Oct 08 09:46:33 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ettfma
Oct 08 09:46:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct 08 09:46:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 08 09:46:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 08 09:46:33 compute-0 ceph-mgr[73869]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 08 09:46:33 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 08 09:46:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct 08 09:46:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 08 09:46:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 08 09:46:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:33 compute-0 podman[95625]: 2025-10-08 09:46:33.748538837 +0000 UTC m=+0.037132571 container create c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:46:33 compute-0 systemd[1]: Started libpod-conmon-c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086.scope.
Oct 08 09:46:33 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee532264497ed1302464d3b0f684ee155a3bb24fcfdebd2176ace256ab3bd67/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee532264497ed1302464d3b0f684ee155a3bb24fcfdebd2176ace256ab3bd67/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:33 compute-0 podman[95625]: 2025-10-08 09:46:33.821474318 +0000 UTC m=+0.110068092 container init c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:46:33 compute-0 podman[95625]: 2025-10-08 09:46:33.732673654 +0000 UTC m=+0.021267408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:33 compute-0 podman[95625]: 2025-10-08 09:46:33.828243814 +0000 UTC m=+0.116837558 container start c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:33 compute-0 podman[95625]: 2025-10-08 09:46:33.831172033 +0000 UTC m=+0.119765777 container attach c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 09:46:33 compute-0 ceph-mon[73572]: Rados config object exists: conf-nfs.cephfs
Oct 08 09:46:33 compute-0 ceph-mon[73572]: Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw
Oct 08 09:46:33 compute-0 ceph-mon[73572]: Bind address in nfs.cephfs.0.0.compute-1.lgtqnn's ganesha conf is defaulting to empty
Oct 08 09:46:33 compute-0 ceph-mon[73572]: 7.b scrub starts
Oct 08 09:46:33 compute-0 ceph-mon[73572]: Deploying daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1
Oct 08 09:46:33 compute-0 ceph-mon[73572]: 7.b scrub ok
Oct 08 09:46:33 compute-0 ceph-mon[73572]: 5.10 scrub starts
Oct 08 09:46:33 compute-0 ceph-mon[73572]: 5.10 scrub ok
Oct 08 09:46:33 compute-0 ceph-mon[73572]: 6.1 scrub starts
Oct 08 09:46:33 compute-0 ceph-mon[73572]: 6.1 scrub ok
Oct 08 09:46:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 08 09:46:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 08 09:46:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 08 09:46:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 08 09:46:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 08 09:46:34 compute-0 crazy_blackburn[95641]: 
Oct 08 09:46:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055594015' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:46:34 compute-0 crazy_blackburn[95641]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_
insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.ixicfj/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.swlvov/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.mtagwx/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.wdkdxi","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.aaugis","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.pgshil","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 08 09:46:34 compute-0 systemd[1]: libpod-c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086.scope: Deactivated successfully.
Oct 08 09:46:34 compute-0 podman[95625]: 2025-10-08 09:46:34.194425874 +0000 UTC m=+0.483019608 container died c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:46:34 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 08 09:46:34 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 08 09:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee532264497ed1302464d3b0f684ee155a3bb24fcfdebd2176ace256ab3bd67-merged.mount: Deactivated successfully.
Oct 08 09:46:34 compute-0 podman[95625]: 2025-10-08 09:46:34.516832731 +0000 UTC m=+0.805426465 container remove c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:46:34 compute-0 systemd[1]: libpod-conmon-c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086.scope: Deactivated successfully.
Oct 08 09:46:34 compute-0 sudo[95622]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:34 compute-0 ceph-mon[73572]: pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:46:34 compute-0 ceph-mon[73572]: 7.10 scrub starts
Oct 08 09:46:34 compute-0 ceph-mon[73572]: 7.10 scrub ok
Oct 08 09:46:34 compute-0 ceph-mon[73572]: 6.15 scrub starts
Oct 08 09:46:34 compute-0 ceph-mon[73572]: 6.15 scrub ok
Oct 08 09:46:34 compute-0 ceph-mon[73572]: 6.1b scrub starts
Oct 08 09:46:34 compute-0 ceph-mon[73572]: 6.1b scrub ok
Oct 08 09:46:34 compute-0 ceph-mon[73572]: Creating key for client.nfs.cephfs.1.0.compute-2.ettfma
Oct 08 09:46:34 compute-0 ceph-mon[73572]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 08 09:46:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3055594015' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 08 09:46:34 compute-0 ceph-mon[73572]: 7.13 scrub starts
Oct 08 09:46:34 compute-0 ceph-mon[73572]: 7.13 scrub ok
Oct 08 09:46:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e8 new map
Oct 08 09:46:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-10-08T09:46:34:982221+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-08T09:46:14.191787+0000
                                           modified        2025-10-08T09:46:34.011128+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24190}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24190 members: 24190
                                           [mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.bumazt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] compat {c=[1],r=[1],i=[1fff]}]
Oct 08 09:46:34 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Updating MDS map to version 8 from mon.2
Oct 08 09:46:34 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] up:active
Oct 08 09:46:34 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] up:standby
Oct 08 09:46:34 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 2 up:standby
Oct 08 09:46:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 5 op/s
Oct 08 09:46:35 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Oct 08 09:46:35 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Oct 08 09:46:35 compute-0 sudo[95716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obwhstjtcdoggsaktmjzhgalmrlpyeqr ; /usr/bin/python3'
Oct 08 09:46:35 compute-0 sudo[95716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:35 compute-0 python3[95718]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:35 compute-0 podman[95719]: 2025-10-08 09:46:35.714703056 +0000 UTC m=+0.043067593 container create 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 09:46:35 compute-0 systemd[1]: Started libpod-conmon-48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516.scope.
Oct 08 09:46:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52d31dee7817cf12675cf4d4aea194511bb430398f9d0e8288f40296ce8cb85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52d31dee7817cf12675cf4d4aea194511bb430398f9d0e8288f40296ce8cb85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:35 compute-0 podman[95719]: 2025-10-08 09:46:35.776848538 +0000 UTC m=+0.105213085 container init 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:35 compute-0 podman[95719]: 2025-10-08 09:46:35.782620504 +0000 UTC m=+0.110985041 container start 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:46:35 compute-0 podman[95719]: 2025-10-08 09:46:35.785691288 +0000 UTC m=+0.114055825 container attach 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:35 compute-0 podman[95719]: 2025-10-08 09:46:35.696203113 +0000 UTC m=+0.024567670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 new map
Oct 08 09:46:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2025-10-08T09:46:35:988720+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-08T09:46:14.191787+0000
                                           modified        2025-10-08T09:46:34.011128+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24190}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24190 members: 24190
                                           [mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.bumazt{-1:24206} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] compat {c=[1],r=[1],i=[1fff]}]
Oct 08 09:46:36 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] up:standby
Oct 08 09:46:36 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 2 up:standby
Oct 08 09:46:36 compute-0 ceph-mon[73572]: 6.a scrub starts
Oct 08 09:46:36 compute-0 ceph-mon[73572]: 6.a scrub ok
Oct 08 09:46:36 compute-0 ceph-mon[73572]: mds.? [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] up:active
Oct 08 09:46:36 compute-0 ceph-mon[73572]: mds.? [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] up:standby
Oct 08 09:46:36 compute-0 ceph-mon[73572]: fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 2 up:standby
Oct 08 09:46:36 compute-0 ceph-mon[73572]: 6.1d scrub starts
Oct 08 09:46:36 compute-0 ceph-mon[73572]: 6.1d scrub ok
Oct 08 09:46:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Oct 08 09:46:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522787505' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 08 09:46:36 compute-0 charming_dhawan[95734]: mimic
Oct 08 09:46:36 compute-0 systemd[1]: libpod-48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516.scope: Deactivated successfully.
Oct 08 09:46:36 compute-0 podman[95719]: 2025-10-08 09:46:36.173829966 +0000 UTC m=+0.502194503 container died 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e52d31dee7817cf12675cf4d4aea194511bb430398f9d0e8288f40296ce8cb85-merged.mount: Deactivated successfully.
Oct 08 09:46:36 compute-0 podman[95719]: 2025-10-08 09:46:36.205891813 +0000 UTC m=+0.534256350 container remove 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:46:36 compute-0 systemd[1]: libpod-conmon-48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516.scope: Deactivated successfully.
Oct 08 09:46:36 compute-0 sudo[95716]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:36 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 08 09:46:36 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 08 09:46:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct 08 09:46:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 08 09:46:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 08 09:46:36 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct 08 09:46:36 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct 08 09:46:36 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ettfma-rgw
Oct 08 09:46:36 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ettfma-rgw
Oct 08 09:46:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 08 09:46:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:46:37 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:46:37 compute-0 ceph-mgr[73869]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.ettfma's ganesha conf is defaulting to empty
Oct 08 09:46:37 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.ettfma's ganesha conf is defaulting to empty
Oct 08 09:46:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:37 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.ettfma on compute-2
Oct 08 09:46:37 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.ettfma on compute-2
Oct 08 09:46:37 compute-0 ceph-mon[73572]: pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 5 op/s
Oct 08 09:46:37 compute-0 ceph-mon[73572]: 6.7 scrub starts
Oct 08 09:46:37 compute-0 ceph-mon[73572]: 6.7 scrub ok
Oct 08 09:46:37 compute-0 ceph-mon[73572]: mds.? [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] up:standby
Oct 08 09:46:37 compute-0 ceph-mon[73572]: fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 2 up:standby
Oct 08 09:46:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3522787505' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 08 09:46:37 compute-0 ceph-mon[73572]: 7.e scrub starts
Oct 08 09:46:37 compute-0 ceph-mon[73572]: 7.e scrub ok
Oct 08 09:46:37 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 08 09:46:37 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 08 09:46:37 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:46:37 compute-0 sudo[95812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmslnaufmkpstzmgzmzbajytlidngmlo ; /usr/bin/python3'
Oct 08 09:46:37 compute-0 sudo[95812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:46:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 5 op/s
Oct 08 09:46:37 compute-0 python3[95814]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:46:37 compute-0 podman[95815]: 2025-10-08 09:46:37.356381985 +0000 UTC m=+0.055772459 container create d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:46:37 compute-0 systemd[1]: Started libpod-conmon-d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0.scope.
Oct 08 09:46:37 compute-0 podman[95815]: 2025-10-08 09:46:37.325899987 +0000 UTC m=+0.025290481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:46:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669e0ba714431ae0590734bb9575a5e0686a3fa89e15a558f918b626ed2ef2ea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669e0ba714431ae0590734bb9575a5e0686a3fa89e15a558f918b626ed2ef2ea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
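
The two kernel notices above mean the XFS filesystem backing /var/lib/containers was formatted without the bigtime feature, so its inode timestamps cap out in January 2038 (0x7fffffff seconds); the message is informational, not an error. A quick probe, sketched under the assumption that the installed xfsprogs is recent enough to report the flag (older releases omit it):

    import subprocess

    # xfs_info prints the mkfs-time feature flags; "bigtime=1" means
    # timestamps run past 2038. The mount point is an assumption.
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True, check=True).stdout
    print("bigtime enabled" if "bigtime=1" in info
          else "bigtime disabled: timestamps capped at 2038")
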
Oct 08 09:46:37 compute-0 podman[95815]: 2025-10-08 09:46:37.453251484 +0000 UTC m=+0.152641978 container init d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:37 compute-0 podman[95815]: 2025-10-08 09:46:37.458260487 +0000 UTC m=+0.157650961 container start d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:37 compute-0 podman[95815]: 2025-10-08 09:46:37.473016766 +0000 UTC m=+0.172407290 container attach d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:46:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Oct 08 09:46:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1901290417' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 08 09:46:37 compute-0 naughty_diffie[95830]: 
Oct 08 09:46:37 compute-0 systemd[1]: libpod-d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0.scope: Deactivated successfully.
Oct 08 09:46:37 compute-0 naughty_diffie[95830]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Oct 08 09:46:37 compute-0 podman[95815]: 2025-10-08 09:46:37.90663963 +0000 UTC m=+0.606030104 container died d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 08 09:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-669e0ba714431ae0590734bb9575a5e0686a3fa89e15a558f918b626ed2ef2ea-merged.mount: Deactivated successfully.
Oct 08 09:46:37 compute-0 podman[95815]: 2025-10-08 09:46:37.986361618 +0000 UTC m=+0.685752092 container remove d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:46:37 compute-0 systemd[1]: libpod-conmon-d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0.scope: Deactivated successfully.
Oct 08 09:46:38 compute-0 sudo[95812]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:38 compute-0 ceph-mon[73572]: 6.8 deep-scrub starts
Oct 08 09:46:38 compute-0 ceph-mon[73572]: 6.8 deep-scrub ok
Oct 08 09:46:38 compute-0 ceph-mon[73572]: Rados config object exists: conf-nfs.cephfs
Oct 08 09:46:38 compute-0 ceph-mon[73572]: Creating key for client.nfs.cephfs.1.0.compute-2.ettfma-rgw
Oct 08 09:46:38 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:46:38 compute-0 ceph-mon[73572]: Bind address in nfs.cephfs.1.0.compute-2.ettfma's ganesha conf is defaulting to empty
Oct 08 09:46:38 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:38 compute-0 ceph-mon[73572]: Deploying daemon nfs.cephfs.1.0.compute-2.ettfma on compute-2
Oct 08 09:46:38 compute-0 ceph-mon[73572]: pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 5 op/s
Oct 08 09:46:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1901290417' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 08 09:46:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:46:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:46:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:46:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:38 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx
Oct 08 09:46:38 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx
Oct 08 09:46:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct 08 09:46:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 08 09:46:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 08 09:46:38 compute-0 ceph-mgr[73869]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 08 09:46:38 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 08 09:46:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct 08 09:46:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 08 09:46:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
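
"Ensuring nfs.cephfs.2 is in the ganesha grace table" refers to the shared recovery database that ganesha's rados_cluster backend keeps as omap data in the .nfs pool; the client.mgr.nfs.grace key created above exists only so the mgr can edit that table. A sketch for inspecting it, assuming the conventional object name "grace" in the export's namespace (neither name appears verbatim in this log):

    import subprocess

    # Dump the grace table's omap entries: one key per ganesha node
    # (e.g. nfs.cephfs.2) plus the grace-epoch bookkeeping.
    out = subprocess.run(
        ["rados", "-p", ".nfs", "--namespace", "cephfs",
         "listomapvals", "grace"],
        capture_output=True, text=True, check=True).stdout
    print(out)
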
Oct 08 09:46:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 5 op/s
Oct 08 09:46:39 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:39 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:39 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:39 compute-0 ceph-mon[73572]: Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx
Oct 08 09:46:39 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 08 09:46:39 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 08 09:46:39 compute-0 ceph-mon[73572]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 08 09:46:39 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 08 09:46:39 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 08 09:46:39 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:41 compute-0 ceph-mon[73572]: pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 5 op/s
Oct 08 09:46:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Oct 08 09:46:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct 08 09:46:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 08 09:46:42 compute-0 ceph-mon[73572]: pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Oct 08 09:46:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 08 09:46:42 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct 08 09:46:42 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct 08 09:46:42 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx-rgw
Oct 08 09:46:42 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx-rgw
Oct 08 09:46:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 08 09:46:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:46:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:46:42 compute-0 ceph-mgr[73869]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.uynkmx's ganesha conf is defaulting to empty
Oct 08 09:46:42 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.uynkmx's ganesha conf is defaulting to empty
Oct 08 09:46:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:46:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:42 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0
Oct 08 09:46:42 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0
Oct 08 09:46:42 compute-0 sudo[95903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:42 compute-0 sudo[95903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:42 compute-0 sudo[95903]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:42 compute-0 sudo[95928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:46:42 compute-0 sudo[95928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:42 compute-0 podman[95995]: 2025-10-08 09:46:42.976558377 +0000 UTC m=+0.042664730 container create 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:46:43 compute-0 systemd[1]: Started libpod-conmon-7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd.scope.
Oct 08 09:46:43 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:43 compute-0 podman[95995]: 2025-10-08 09:46:43.032939224 +0000 UTC m=+0.099045587 container init 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 09:46:43 compute-0 podman[95995]: 2025-10-08 09:46:43.039383861 +0000 UTC m=+0.105490224 container start 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 08 09:46:43 compute-0 podman[95995]: 2025-10-08 09:46:43.042944819 +0000 UTC m=+0.109051182 container attach 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Oct 08 09:46:43 compute-0 sweet_wiles[96011]: 167 167
Oct 08 09:46:43 compute-0 systemd[1]: libpod-7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd.scope: Deactivated successfully.
Oct 08 09:46:43 compute-0 podman[95995]: 2025-10-08 09:46:43.044160786 +0000 UTC m=+0.110267139 container died 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:46:43 compute-0 podman[95995]: 2025-10-08 09:46:42.953721442 +0000 UTC m=+0.019827885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bde36b4e8d9c05736ee9178d6e5f42caf96d46a0bd18b8bfaacd0c30668d9f3-merged.mount: Deactivated successfully.
Oct 08 09:46:43 compute-0 podman[95995]: 2025-10-08 09:46:43.080394779 +0000 UTC m=+0.146501152 container remove 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 09:46:43 compute-0 systemd[1]: libpod-conmon-7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd.scope: Deactivated successfully.
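
The throwaway sweet_wiles container that printed "167 167" is cephadm's pre-deploy ownership probe: before writing a daemon's files it asks the image which uid and gid own /var/lib/ceph (167 is the ceph user in these images), then chowns the host-side data directory to match. Roughly reproduced below; the flags only approximate cephadm's own invocation:

    import subprocess

    # Ask the image which uid/gid owns /var/lib/ceph, as cephadm does
    # before laying down the daemon's data directory on the host.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph:v19", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.strip()
    uid, gid = out.split()
    print(uid, gid)   # "167 167" expected for Ceph images
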
Oct 08 09:46:43 compute-0 systemd[1]: Reloading.
Oct 08 09:46:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Oct 08 09:46:43 compute-0 systemd-rc-local-generator[96050]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:46:43 compute-0 systemd-sysv-generator[96054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:46:43 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:46:43 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:46:43 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:46:43 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:46:43 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:46:43 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:46:43 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 08 09:46:43 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 08 09:46:43 compute-0 ceph-mon[73572]: Rados config object exists: conf-nfs.cephfs
Oct 08 09:46:43 compute-0 ceph-mon[73572]: Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx-rgw
Oct 08 09:46:43 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 08 09:46:43 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 08 09:46:43 compute-0 ceph-mon[73572]: Bind address in nfs.cephfs.2.0.compute-0.uynkmx's ganesha conf is defaulting to empty
Oct 08 09:46:43 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:46:43 compute-0 ceph-mon[73572]: Deploying daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0
Oct 08 09:46:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:43 compute-0 systemd[1]: Reloading.
Oct 08 09:46:43 compute-0 systemd-rc-local-generator[96091]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:46:43 compute-0 systemd-sysv-generator[96095]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:46:43 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:46:43 compute-0 podman[96152]: 2025-10-08 09:46:43.889779075 +0000 UTC m=+0.052667565 container create c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:46:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:43 compute-0 podman[96152]: 2025-10-08 09:46:43.939765497 +0000 UTC m=+0.102653977 container init c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:46:43 compute-0 podman[96152]: 2025-10-08 09:46:43.861809453 +0000 UTC m=+0.024698043 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:46:43 compute-0 podman[96152]: 2025-10-08 09:46:43.956352342 +0000 UTC m=+0.119240812 container start c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:46:43 compute-0 bash[96152]: c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc
Oct 08 09:46:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 09:46:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 09:46:43 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
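
systemd runs the new ganesha container through the templated unit cephadm generated during the "Reloading." passes above, named ceph-<fsid>@<daemon-id>. A status probe assembled from the fsid and daemon name visible in this log:

    import subprocess

    unit = ("ceph-787292cc-8154-50c4-9e00-e9be3e817149"
            "@nfs.cephfs.2.0.compute-0.uynkmx.service")
    # Expected to report "active" once the Started line above has been logged.
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, "->", state)
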
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 09:46:44 compute-0 sudo[95928]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 09:46:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:46:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:46:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:46:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:44 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 14600416-a126-4524-a7b9-d20314f3302e (Updating nfs.cephfs deployment (+3 -> 3))
Oct 08 09:46:44 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 14600416-a126-4524-a7b9-d20314f3302e (Updating nfs.cephfs deployment (+3 -> 3)) in 12 seconds
Oct 08 09:46:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:46:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:44 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 7ba10d6d-35d7-417a-acf8-1cda7124e4f2 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct 08 09:46:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Oct 08 09:46:44 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:44 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.mmphxo on compute-1
Oct 08 09:46:44 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.mmphxo on compute-1
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:46:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:46:45 compute-0 ceph-mon[73572]: pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Oct 08 09:46:45 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:45 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:45 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:45 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:45 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 09:46:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
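
The banner above means ganesha came up despite the earlier CRIT lines: there is no D-Bus system bus or usable krb5 keytab inside the container, so the dbus service thread exits and Kerberos callback credentials are skipped, neither of which is fatal for this deployment. The "Init monitoring at 0.0.0.0:9587" line earlier announced its metrics listener; a probe of that port, assuming this ganesha build exposes Prometheus-style /metrics there:

    import urllib.request

    # Port comes from the monitoring_init line above; the /metrics path
    # is an assumption about this ganesha build.
    with urllib.request.urlopen("http://127.0.0.1:9587/metrics", timeout=5) as r:
        print(r.status, r.read(300).decode())
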
Oct 08 09:46:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 2.7 KiB/s wr, 7 op/s
Oct 08 09:46:46 compute-0 ceph-mon[73572]: Deploying daemon haproxy.nfs.cephfs.compute-1.mmphxo on compute-1
Oct 08 09:46:47 compute-0 ceph-mon[73572]: pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 2.7 KiB/s wr, 7 op/s
Oct 08 09:46:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:46:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:46:48 compute-0 ceph-mon[73572]: pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:46:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:46:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 08 09:46:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:48 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.cwhopp on compute-0
Oct 08 09:46:48 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.cwhopp on compute-0
Oct 08 09:46:48 compute-0 sudo[96222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:48 compute-0 sudo[96222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:48 compute-0 sudo[96222]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:48 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 15 completed events
Oct 08 09:46:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:46:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:48 compute-0 sudo[96247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:46:48 compute-0 sudo[96247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:49 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:49 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:49 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:49 compute-0 ceph-mon[73572]: Deploying daemon haproxy.nfs.cephfs.compute-0.cwhopp on compute-0
Oct 08 09:46:49 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:46:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6630000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:50 compute-0 ceph-mon[73572]: pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:46:50 compute-0 podman[96312]: 2025-10-08 09:46:50.819916373 +0000 UTC m=+2.157150295 container create f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct 08 09:46:50 compute-0 systemd[1]: Started libpod-conmon-f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac.scope.
Oct 08 09:46:50 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:46:50 compute-0 podman[96312]: 2025-10-08 09:46:50.805411145 +0000 UTC m=+2.142645087 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 08 09:46:50 compute-0 podman[96312]: 2025-10-08 09:46:50.889944172 +0000 UTC m=+2.227178164 container init f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct 08 09:46:50 compute-0 podman[96312]: 2025-10-08 09:46:50.896909551 +0000 UTC m=+2.234143473 container start f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct 08 09:46:50 compute-0 podman[96312]: 2025-10-08 09:46:50.899818892 +0000 UTC m=+2.237052875 container attach f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct 08 09:46:50 compute-0 cranky_black[96429]: 0 0
Oct 08 09:46:50 compute-0 systemd[1]: libpod-f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac.scope: Deactivated successfully.
Oct 08 09:46:50 compute-0 conmon[96429]: conmon f5918308668a45e3fa22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac.scope/container/memory.events
Oct 08 09:46:50 compute-0 podman[96312]: 2025-10-08 09:46:50.90353047 +0000 UTC m=+2.240764422 container died f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct 08 09:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b09d3b0c91e05dd0434d2289ae81e908858322c91af69cd400ecb3ef743548fc-merged.mount: Deactivated successfully.
Oct 08 09:46:50 compute-0 podman[96312]: 2025-10-08 09:46:50.947513647 +0000 UTC m=+2.284747569 container remove f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct 08 09:46:50 compute-0 systemd[1]: libpod-conmon-f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac.scope: Deactivated successfully.
Oct 08 09:46:51 compute-0 systemd[1]: Reloading.
Oct 08 09:46:51 compute-0 systemd-rc-local-generator[96476]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:46:51 compute-0 systemd-sysv-generator[96479]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:46:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 6 op/s
Oct 08 09:46:51 compute-0 systemd[1]: Reloading.
Oct 08 09:46:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:51 compute-0 systemd-rc-local-generator[96517]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:46:51 compute-0 systemd-sysv-generator[96521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:46:51 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.cwhopp for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:46:51 compute-0 podman[96573]: 2025-10-08 09:46:51.813141102 +0000 UTC m=+0.040536630 container create 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9206325a51650bd28386027213364efa621af0c7b19bb1a2c2c16eac6fec86/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 08 09:46:51 compute-0 podman[96573]: 2025-10-08 09:46:51.865189744 +0000 UTC m=+0.092585272 container init 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:46:51 compute-0 podman[96573]: 2025-10-08 09:46:51.871668298 +0000 UTC m=+0.099063816 container start 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:46:51 compute-0 bash[96573]: 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5
Oct 08 09:46:51 compute-0 podman[96573]: 2025-10-08 09:46:51.79151934 +0000 UTC m=+0.018914928 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 08 09:46:51 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.cwhopp for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:46:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [NOTICE] 280/094651 (2) : New worker #1 (4) forked
Oct 08 09:46:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094651 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
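
The haproxy worker that just started marks server backend/nfs.cephfs.0 DOWN after a Layer4 check failure: a plain TCP connect to the ganesha backend was refused, presumably because that NFS daemon is not yet listening, leaving 2 active servers in the backend. A rough Python equivalent of such a Layer4 probe is sketched below; the host and port are placeholders, not values taken from this log.

    # Rough equivalent of a Layer4 (TCP connect) health check. Host and port are
    # placeholders for illustration only.
    import socket

    def layer4_check(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True            # connect succeeded -> server considered UP
        except OSError as exc:          # ConnectionRefusedError, timeouts, etc.
            print(f"check failed: {exc}")
            return False

    print("UP" if layer4_check("192.0.2.10", 2049) else "DOWN")
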
Oct 08 09:46:51 compute-0 sudo[96247]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:46:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:46:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 08 09:46:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:52 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.jzsqfr on compute-2
Oct 08 09:46:52 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.jzsqfr on compute-2
Oct 08 09:46:52 compute-0 ceph-mon[73572]: pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 6 op/s
Oct 08 09:46:52 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:52 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:52 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:52 compute-0 ceph-mon[73572]: Deploying daemon haproxy.nfs.cephfs.compute-2.jzsqfr on compute-2
Oct 08 09:46:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 08 09:46:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:55 compute-0 ceph-mon[73572]: pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 08 09:46:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 08 09:46:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:56 compute-0 ceph-mon[73572]: pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 08 09:46:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:46:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:46:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:46:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 08 09:46:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Oct 08 09:46:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:58 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:46:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:46:58 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 08 09:46:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 08 09:46:58 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:46:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:46:58 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.ekerbw on compute-0
Oct 08 09:46:58 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.ekerbw on compute-0
Oct 08 09:46:58 compute-0 sudo[96602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:46:58 compute-0 sudo[96602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:58 compute-0 sudo[96602]: pam_unix(sudo:session): session closed for user root
Oct 08 09:46:58 compute-0 sudo[96627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:46:58 compute-0 sudo[96627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:46:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:46:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:58 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:58 compute-0 ceph-mon[73572]: pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:46:58 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:58 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:58 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:58 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:46:58 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:46:58 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 08 09:46:58 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:46:58 compute-0 ceph-mon[73572]: Deploying daemon keepalived.nfs.cephfs.compute-0.ekerbw on compute-0
Oct 08 09:46:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:46:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:46:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:00 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:01 compute-0 ceph-mon[73572]: pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:47:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 938 B/s wr, 4 op/s
Oct 08 09:47:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:01 compute-0 podman[96690]: 2025-10-08 09:47:01.834269282 +0000 UTC m=+3.301915514 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 08 09:47:01 compute-0 podman[96690]: 2025-10-08 09:47:01.865340872 +0000 UTC m=+3.332987054 container create c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, io.buildah.version=1.28.2, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, distribution-scope=public)
Oct 08 09:47:01 compute-0 systemd[1]: Started libpod-conmon-c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c.scope.
Oct 08 09:47:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:01 compute-0 podman[96690]: 2025-10-08 09:47:01.992523294 +0000 UTC m=+3.460169526 container init c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Oct 08 09:47:02 compute-0 podman[96690]: 2025-10-08 09:47:02.004016707 +0000 UTC m=+3.471662859 container start c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, vcs-type=git, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4, io.buildah.version=1.28.2, description=keepalived for Ceph)
Oct 08 09:47:02 compute-0 podman[96690]: 2025-10-08 09:47:02.008221969 +0000 UTC m=+3.475868211 container attach c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, com.redhat.component=keepalived-container, name=keepalived, io.buildah.version=1.28.2, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Oct 08 09:47:02 compute-0 objective_curran[96785]: 0 0
Oct 08 09:47:02 compute-0 systemd[1]: libpod-c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c.scope: Deactivated successfully.
Oct 08 09:47:02 compute-0 podman[96690]: 2025-10-08 09:47:02.014924091 +0000 UTC m=+3.482570263 container died c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, description=keepalived for Ceph, architecture=x86_64, vcs-type=git, distribution-scope=public, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9)
Oct 08 09:47:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f7be7913524650d53c677bb2242b3058408f5bcd1bfaaea024d8d313d34c73f-merged.mount: Deactivated successfully.
Oct 08 09:47:02 compute-0 podman[96690]: 2025-10-08 09:47:02.054941172 +0000 UTC m=+3.522587324 container remove c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container)
Oct 08 09:47:02 compute-0 systemd[1]: libpod-conmon-c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c.scope: Deactivated successfully.
Oct 08 09:47:02 compute-0 systemd[1]: Reloading.
Oct 08 09:47:02 compute-0 systemd-rc-local-generator[96837]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:02 compute-0 systemd-sysv-generator[96840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:02 compute-0 systemd[1]: Reloading.
Oct 08 09:47:02 compute-0 systemd-rc-local-generator[96877]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:02 compute-0 systemd-sysv-generator[96881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:02 compute-0 ceph-mon[73572]: pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 938 B/s wr, 4 op/s
Oct 08 09:47:02 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.ekerbw for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:47:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:02 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:02 compute-0 podman[96933]: 2025-10-08 09:47:02.917270603 +0000 UTC m=+0.038658780 container create 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, name=keepalived, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., description=keepalived for Ceph)
Oct 08 09:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf608840db6bd57f42ef4334d86738f1b72c5b69cdf5dc1a5e13b649cc13a302/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:02 compute-0 podman[96933]: 2025-10-08 09:47:02.971864896 +0000 UTC m=+0.093253093 container init 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived, io.buildah.version=1.28.2)
Oct 08 09:47:02 compute-0 podman[96933]: 2025-10-08 09:47:02.976434769 +0000 UTC m=+0.097822946 container start 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, version=2.2.4, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.tags=Ceph keepalived)
Oct 08 09:47:02 compute-0 bash[96933]: 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d
Oct 08 09:47:02 compute-0 podman[96933]: 2025-10-08 09:47:02.901967171 +0000 UTC m=+0.023355368 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 08 09:47:02 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.ekerbw for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:47:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 08 09:47:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct 08 09:47:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 08 09:47:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 08 09:47:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 08 09:47:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Starting VRRP child process, pid=4
Oct 08 09:47:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Startup complete
Oct 08 09:47:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:47:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:03 2025: (VI_0) Entering BACKUP STATE (init)
Oct 08 09:47:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:03 2025: VRRP_Script(check_backend) succeeded
Oct 08 09:47:03 compute-0 sudo[96627]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:03 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:03 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 08 09:47:03 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:03 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:03 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:03 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:03 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:03 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 08 09:47:03 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 08 09:47:03 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.bmcbib on compute-2
Oct 08 09:47:03 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.bmcbib on compute-2
Oct 08 09:47:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:47:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:04 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:04 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:04 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:04 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:05 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:05 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:05 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 08 09:47:05 compute-0 ceph-mon[73572]: Deploying daemon keepalived.nfs.cephfs.compute-2.bmcbib on compute-2
Oct 08 09:47:05 compute-0 ceph-mon[73572]: pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:47:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:47:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:47:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:47:06 compute-0 ceph-mon[73572]: pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:47:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:06 2025: (VI_0) Entering MASTER STATE
Oct 08 09:47:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:47:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:47:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:47:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 08 09:47:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:07 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 08 09:47:07 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 08 09:47:07 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:07 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:07 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:07 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:07 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.sbjzmp on compute-1
Oct 08 09:47:07 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.sbjzmp on compute-1
Oct 08 09:47:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:08 compute-0 ceph-mon[73572]: pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:47:08 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:08 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:08 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:08 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 08 09:47:08 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:08 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:08 compute-0 ceph-mon[73572]: Deploying daemon keepalived.nfs.cephfs.compute-1.sbjzmp on compute-1
Oct 08 09:47:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:08 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
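
Taken together, the ganesha reaper lines show the expected startup sequence: the server enters a 90-second grace period, reloads client reclaim information from its backend, checks whether any clients still need to reclaim state ("reclaim complete(0) clid count(0)"), and, finding none, lifts grace early. A simplified restatement of that decision, not ganesha's actual logic:

    # Simplified sketch of the early grace-lift decision suggested by the reaper
    # messages above: with no clients left to wait for, grace can end before the
    # full 90 seconds. The exact meaning of the two counters is assumed here.
    def can_lift_grace(pending_reclaims, clients_with_state):
        return pending_reclaims == 0 and clients_with_state == 0

    if can_lift_grace(0, 0):   # counters as logged: reclaim complete(0), clid count(0)
        print("lifting grace early: NFS server now NOT IN GRACE")
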
Oct 08 09:47:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:47:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608002a20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:10 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:10 compute-0 ceph-mon[73572]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:47:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:47:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:11 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
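
Here the keepalived instance on compute-0, already MASTER for VI_0, receives a VRRP advert from 192.168.122.102 carrying priority 90 against its own 100, so it keeps the virtual IP and forces a new election instead of yielding. A toy version of that comparison (not keepalived's state machine; real VRRP also breaks priority ties on IP address):

    # Toy VRRP preemption check mirroring the keepalived message above.
    def should_yield(own_priority, advert_priority):
        return advert_priority > own_priority   # tie-breaking on IP is ignored here

    own, peer = 100, 90   # priorities from the log line above
    print("yield to peer" if should_yield(own, peer) else "stay MASTER, force new election")
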
Oct 08 09:47:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:47:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:47:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 08 09:47:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:12 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 7ba10d6d-35d7-417a-acf8-1cda7124e4f2 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct 08 09:47:12 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 7ba10d6d-35d7-417a-acf8-1cda7124e4f2 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 29 seconds
Oct 08 09:47:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 08 09:47:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:12 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev c79faeab-2ee3-4aba-a667-4c696cb5984a (Updating alertmanager deployment (+1 -> 1))
Oct 08 09:47:12 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Oct 08 09:47:12 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Oct 08 09:47:12 compute-0 sudo[96960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:12 compute-0 sudo[96960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:12 compute-0 sudo[96960]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:12 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:12 compute-0 sudo[96985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:47:12 compute-0 sudo[96985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:13 compute-0 ceph-mon[73572]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:47:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:13 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:47:13
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.meta']
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:47:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
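
Each pg_autoscaler line above reports a pool's share of raw capacity, its bias, and a fractional pg target before quantization. The logged targets are all consistent with pg target = capacity ratio * bias * 300, where the budget of 300 is an inference from the numbers (for example 100 target PGs per OSD across the 3 OSDs the osdmap reports below), not something the log states; the "quantized to" values additionally reflect per-pool minimums and maximums, so they do not follow from the fractional target alone. A quick cross-check:

    # Cross-check of the pg_autoscaler lines above:
    #   pg target == capacity_ratio * bias * PG_BUDGET
    # PG_BUDGET = 300 is inferred from the logged values, not stated by the log.
    PG_BUDGET = 300
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
        (".rgw.root",          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
        (".nfs",               6.359070782053786e-08,  1.0, 1.907721234616136e-05),
    ]
    for name, ratio, bias, logged in pools:
        computed = ratio * bias * PG_BUDGET
        print(f"{name:20s} computed={computed:.12g} logged={logged:.12g}")
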
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:47:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:47:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 16 completed events
Oct 08 09:47:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:47:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:47:13 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:47:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 08 09:47:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 08 09:47:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 08 09:47:14 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev f81b6bbc-4070-4d6d-ab15-864f1e35b4da (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 08 09:47:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:47:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:14 compute-0 ceph-mon[73572]: Deploying daemon alertmanager.compute-0 on compute-0
Oct 08 09:47:14 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:14 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:14 compute-0 podman[97052]: 2025-10-08 09:47:14.71350008 +0000 UTC m=+1.463676270 volume create e7fbce31307d52020c8fa218d057146ec835c7fd69c2b223d3901ba1f837055e
Oct 08 09:47:14 compute-0 podman[97052]: 2025-10-08 09:47:14.728644447 +0000 UTC m=+1.478820627 container create ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:14 compute-0 systemd[1]: Started libpod-conmon-ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e.scope.
Oct 08 09:47:14 compute-0 podman[97052]: 2025-10-08 09:47:14.693821219 +0000 UTC m=+1.443997479 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 08 09:47:14 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d61c1de96cfce7a7e31ac5dae9b37d220e52aa8fd494e3bf9011ae3941936/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:14 compute-0 podman[97052]: 2025-10-08 09:47:14.808322241 +0000 UTC m=+1.558498411 container init ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:14 compute-0 podman[97052]: 2025-10-08 09:47:14.814490726 +0000 UTC m=+1.564666876 container start ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:14 compute-0 loving_elion[97188]: 65534 65534
Oct 08 09:47:14 compute-0 systemd[1]: libpod-ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e.scope: Deactivated successfully.
Oct 08 09:47:14 compute-0 podman[97052]: 2025-10-08 09:47:14.823859891 +0000 UTC m=+1.574036131 container attach ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:14 compute-0 podman[97052]: 2025-10-08 09:47:14.824604134 +0000 UTC m=+1.574780294 container died ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-248d61c1de96cfce7a7e31ac5dae9b37d220e52aa8fd494e3bf9011ae3941936-merged.mount: Deactivated successfully.
Oct 08 09:47:14 compute-0 podman[97052]: 2025-10-08 09:47:14.873294631 +0000 UTC m=+1.623470781 container remove ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:14 compute-0 podman[97052]: 2025-10-08 09:47:14.87742509 +0000 UTC m=+1.627601260 volume remove e7fbce31307d52020c8fa218d057146ec835c7fd69c2b223d3901ba1f837055e
Oct 08 09:47:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:14 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:14 compute-0 systemd[1]: libpod-conmon-ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e.scope: Deactivated successfully.
Oct 08 09:47:14 compute-0 podman[97207]: 2025-10-08 09:47:14.939333153 +0000 UTC m=+0.037903766 volume create 7c56196c125ab8ddf6545850be572c44c3507a2ce4af7c71a9194b008fa1e728
Oct 08 09:47:14 compute-0 podman[97207]: 2025-10-08 09:47:14.948206083 +0000 UTC m=+0.046776696 container create adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:14 compute-0 systemd[1]: Started libpod-conmon-adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999.scope.
Oct 08 09:47:15 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6698b89fd6504ea1cbd0075637e16204bd04b9c7acaa20998acd79970229f323/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:15 compute-0 podman[97207]: 2025-10-08 09:47:14.922934846 +0000 UTC m=+0.021505479 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 08 09:47:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 08 09:47:15 compute-0 podman[97207]: 2025-10-08 09:47:15.022961651 +0000 UTC m=+0.121532294 container init adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:15 compute-0 podman[97207]: 2025-10-08 09:47:15.02895348 +0000 UTC m=+0.127524103 container start adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 08 09:47:15 compute-0 intelligent_dubinsky[97223]: 65534 65534
Oct 08 09:47:15 compute-0 systemd[1]: libpod-adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999.scope: Deactivated successfully.
Oct 08 09:47:15 compute-0 podman[97207]: 2025-10-08 09:47:15.032340587 +0000 UTC m=+0.130911220 container attach adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:15 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 08 09:47:15 compute-0 podman[97207]: 2025-10-08 09:47:15.033544016 +0000 UTC m=+0.132114639 container died adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:15 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev e7286b65-9033-43ec-a2dd-3b3dd3094fdb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 08 09:47:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:47:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:15 compute-0 ceph-mon[73572]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:47:15 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:15 compute-0 ceph-mon[73572]: osdmap e52: 3 total, 3 up, 3 in
Oct 08 09:47:15 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:15 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:15 compute-0 ceph-mon[73572]: osdmap e53: 3 total, 3 up, 3 in
Oct 08 09:47:15 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6698b89fd6504ea1cbd0075637e16204bd04b9c7acaa20998acd79970229f323-merged.mount: Deactivated successfully.
Oct 08 09:47:15 compute-0 podman[97207]: 2025-10-08 09:47:15.074822177 +0000 UTC m=+0.173392790 container remove adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:15 compute-0 podman[97207]: 2025-10-08 09:47:15.078797082 +0000 UTC m=+0.177367695 volume remove 7c56196c125ab8ddf6545850be572c44c3507a2ce4af7c71a9194b008fa1e728
Oct 08 09:47:15 compute-0 systemd[1]: libpod-conmon-adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999.scope: Deactivated successfully.
Oct 08 09:47:15 compute-0 systemd[1]: Reloading.
Oct 08 09:47:15 compute-0 systemd-sysv-generator[97275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:15 compute-0 systemd-rc-local-generator[97270]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 639 B/s wr, 2 op/s
Oct 08 09:47:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:47:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:47:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:15 compute-0 systemd[1]: Reloading.
Oct 08 09:47:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:15 compute-0 systemd-rc-local-generator[97308]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:15 compute-0 systemd-sysv-generator[97312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:15 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:47:15 compute-0 podman[97366]: 2025-10-08 09:47:15.880098898 +0000 UTC m=+0.033910030 volume create 00310bf376a0b175ca8d85fb11d168f2f95f64f3756abaadb6e57846efdbc0ea
Oct 08 09:47:15 compute-0 podman[97366]: 2025-10-08 09:47:15.88995522 +0000 UTC m=+0.043766352 container create 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094715 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ce96f5b36afca03959d3dd28785acc44bc98ac7848532a544c80c3ee2cbbf3/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ce96f5b36afca03959d3dd28785acc44bc98ac7848532a544c80c3ee2cbbf3/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:15 compute-0 podman[97366]: 2025-10-08 09:47:15.948799726 +0000 UTC m=+0.102610948 container init 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:15 compute-0 podman[97366]: 2025-10-08 09:47:15.953509874 +0000 UTC m=+0.107321046 container start 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:15 compute-0 bash[97366]: 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e
Oct 08 09:47:15 compute-0 podman[97366]: 2025-10-08 09:47:15.868402379 +0000 UTC m=+0.022213531 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 08 09:47:15 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:47:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:15.980Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct 08 09:47:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:15.980Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct 08 09:47:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:15.993Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Oct 08 09:47:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:15.995Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct 08 09:47:16 compute-0 sudo[96985]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 08 09:47:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:16.040Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 08 09:47:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:16.041Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct 08 09:47:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:16.047Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct 08 09:47:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:16.047Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 08 09:47:16 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 93b10f5d-f027-4d2d-852f-db5ecd9fbce7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:16 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:16 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev c79faeab-2ee3-4aba-a667-4c696cb5984a (Updating alertmanager deployment (+1 -> 1))
Oct 08 09:47:16 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event c79faeab-2ee3-4aba-a667-4c696cb5984a (Updating alertmanager deployment (+1 -> 1)) in 3 seconds
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:16 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 9550db9d-3c92-4760-9334-11f23ea86e6f (Updating grafana deployment (+1 -> 1))
Oct 08 09:47:16 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Oct 08 09:47:16 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 08 09:47:16 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 08 09:47:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Oct 08 09:47:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:16 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Oct 08 09:47:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:16 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Oct 08 09:47:16 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Oct 08 09:47:16 compute-0 sudo[97404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:16 compute-0 sudo[97404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:16 compute-0 sudo[97404]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:16 compute-0 sudo[97429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:47:16 compute-0 sudo[97429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:16 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 08 09:47:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v42: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:47:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:47:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 08 09:47:17 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 54 pg[9.0( v 45'1018 (0'0,45'1018] local-lis/les=38/39 n=178 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54 pruub=14.250038147s) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 45'1017 mlcod 45'1017 active pruub 179.526794434s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 54 pg[8.0( v 37'12 (0'0,37'12] local-lis/les=36/37 n=6 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=54 pruub=12.293242455s) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 37'11 mlcod 37'11 active pruub 177.570617676s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:17 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 5180560c-0a09-4c25-9066-0eb3d77771f3 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 08 09:47:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Oct 08 09:47:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.0( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=54 pruub=12.293242455s) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 37'11 mlcod 0'0 unknown pruub 177.570617676s@ mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x559f2c6cefc0) operator()   moving buffer(0x559f2b2c85c8 space 0x559f2b24a1b0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x559f2c6cefc0) operator()   moving buffer(0x559f2b2e3a68 space 0x559f2b3261b0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x559f2c6cefc0) operator()   moving buffer(0x559f2b2e3388 space 0x559f2b326420 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x559f2c6cefc0) operator()   moving buffer(0x559f2b2e2f28 space 0x559f2b3265c0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.4( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.2( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1a( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.15( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.b( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.e( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.14( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.8( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.9( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.7( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.c( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.18( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1e( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.3( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.17( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1b( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.5( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.19( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1f( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1( v 37'12 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-mon[73572]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 639 B/s wr, 2 op/s
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:17 compute-0 ceph-mon[73572]: osdmap e54: 3 total, 3 up, 3 in
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:17 compute-0 ceph-mon[73572]: Regenerating cephadm self-signed grafana TLS certificates
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.11( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.16( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 08 09:47:17 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:17 compute-0 ceph-mon[73572]: Deploying daemon grafana.compute-0 on compute-0
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.10( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.12( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1d( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.f( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.13( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1c( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.6( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.a( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.d( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.0( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54 pruub=14.250038147s) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 45'1017 mlcod 0'0 unknown pruub 179.526794434s@ mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc8fc8 space 0x559f2b24d2c0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2e5ec8 space 0x559f2b1d4760 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d0c08 space 0x559f2b24d940 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2c8de8 space 0x559f2b24dc80 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2c8ca8 space 0x559f2b1d4900 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b323a68 space 0x559f2b24d120 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2ee668 space 0x559f2b1d57a0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2e3748 space 0x559f2b1d49d0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d0d48 space 0x559f2b1d5120 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2e3248 space 0x559f2b1d4aa0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc8b68 space 0x559f2b190760 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2eed48 space 0x559f2b3277a0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc8208 space 0x559f2b24dae0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2e5068 space 0x559f2b24c4f0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d1428 space 0x559f2b1d5460 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9f68 space 0x559f2b0bad10 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b306f28 space 0x559f2b1d4de0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2eeac8 space 0x559f2b1d4d10 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9b08 space 0x559f2b24d600 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d1b08 space 0x559f2b1d5390 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d07a8 space 0x559f2b326350 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2ee0c8 space 0x559f2b1d4c40 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9748 space 0x559f2b3260e0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b06aa28 space 0x559f2b0ada10 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2ef4c8 space 0x559f2b327870 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d1928 space 0x559f2b1d5530 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9568 space 0x559f2b24d7a0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2ef9c8 space 0x559f2b327940 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9ba8 space 0x559f2b24d460 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b306028 space 0x559f2b1d56d0 0x0~1000 clean)
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.7( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.17( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.16( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.13( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1e( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.10( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.4( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.b( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1d( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.c( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.a( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1b( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.19( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.3( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.6( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.e( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1f( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.14( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.15( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.2( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.18( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1a( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.5( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.9( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.11( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.d( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.8( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.f( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.12( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1c( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:17.995Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000075209s
Oct 08 09:47:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 08 09:47:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 08 09:47:18 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev e52731e3-e9d7-41cd-9989-1ba9708abc37 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev f81b6bbc-4070-4d6d-ab15-864f1e35b4da (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event f81b6bbc-4070-4d6d-ab15-864f1e35b4da (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev e7286b65-9033-43ec-a2dd-3b3dd3094fdb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event e7286b65-9033-43ec-a2dd-3b3dd3094fdb (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 93b10f5d-f027-4d2d-852f-db5ecd9fbce7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 93b10f5d-f027-4d2d-852f-db5ecd9fbce7 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 5180560c-0a09-4c25-9066-0eb3d77771f3 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 5180560c-0a09-4c25-9066-0eb3d77771f3 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev e52731e3-e9d7-41cd-9989-1ba9708abc37 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event e52731e3-e9d7-41cd-9989-1ba9708abc37 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-mon[73572]: pgmap v42: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:47:18 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:18 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:18 compute-0 ceph-mon[73572]: osdmap e55: 3 total, 3 up, 3 in
Oct 08 09:47:18 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.14( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.14( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.16( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.15( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.17( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.10( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.2( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.2( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.11( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.8( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.9( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.a( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.e( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.c( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.d( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.3( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.0( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 45'1017 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.0( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 37'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.6( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.7( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.4( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1a( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.4( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.5( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.19( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.18( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1c( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1d( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.5( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1e( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.13( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.12( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 22 completed events
Oct 08 09:47:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:47:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:18 compute-0 ceph-mgr[73869]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Oct 08 09:47:18 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 08 09:47:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:18 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 08 09:47:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:18 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v45: 291 pgs: 93 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:47:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct 08 09:47:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 08 09:47:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 08 09:47:19 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 08 09:47:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 08 09:47:19 compute-0 ceph-mon[73572]: osdmap e56: 3 total, 3 up, 3 in
Oct 08 09:47:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:19 compute-0 ceph-mon[73572]: 9.15 scrub starts
Oct 08 09:47:19 compute-0 ceph-mon[73572]: 9.15 scrub ok
Oct 08 09:47:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:19 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 08 09:47:19 compute-0 ceph-mon[73572]: osdmap e57: 3 total, 3 up, 3 in
Oct 08 09:47:19 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 08 09:47:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:19 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 57 pg[11.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.975506783s) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active pruub 183.588607788s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:19 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 57 pg[11.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.975506783s) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown pruub 183.588607788s@ mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:19 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 08 09:47:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 08 09:47:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 08 09:47:20 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.17( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.16( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.15( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.13( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.c( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.b( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.a( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.9( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.d( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.e( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.8( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.2( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.3( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.6( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.18( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.19( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1f( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.10( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.11( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.15( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=57/58 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.c( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.b( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.9( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.d( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.2( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.6( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1f( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.10( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.18( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.11( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:20 compute-0 ceph-mon[73572]: pgmap v45: 291 pgs: 93 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:20 compute-0 ceph-mon[73572]: 9.17 scrub starts
Oct 08 09:47:20 compute-0 ceph-mon[73572]: 9.17 scrub ok
Oct 08 09:47:20 compute-0 ceph-mon[73572]: 10.17 scrub starts
Oct 08 09:47:20 compute-0 ceph-mon[73572]: 10.17 scrub ok
Oct 08 09:47:20 compute-0 ceph-mon[73572]: osdmap e58: 3 total, 3 up, 3 in
Oct 08 09:47:20 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct 08 09:47:20 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct 08 09:47:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:20 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v48: 353 pgs: 1 peering, 93 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:21 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 08 09:47:21 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 08 09:47:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:21 compute-0 ceph-mon[73572]: 9.14 scrub starts
Oct 08 09:47:21 compute-0 ceph-mon[73572]: 9.14 scrub ok
Oct 08 09:47:21 compute-0 ceph-mon[73572]: 10.16 scrub starts
Oct 08 09:47:21 compute-0 ceph-mon[73572]: 10.16 scrub ok
Oct 08 09:47:21 compute-0 podman[97496]: 2025-10-08 09:47:21.882297998 +0000 UTC m=+5.073190577 container create e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:21 compute-0 systemd[1]: Started libpod-conmon-e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51.scope.
Oct 08 09:47:21 compute-0 podman[97496]: 2025-10-08 09:47:21.86398995 +0000 UTC m=+5.054882549 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 08 09:47:21 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:21 compute-0 podman[97496]: 2025-10-08 09:47:21.965338046 +0000 UTC m=+5.156230635 container init e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:21 compute-0 podman[97496]: 2025-10-08 09:47:21.972596546 +0000 UTC m=+5.163489135 container start e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:21 compute-0 podman[97496]: 2025-10-08 09:47:21.976624922 +0000 UTC m=+5.167517511 container attach e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:21 compute-0 adoring_feistel[97718]: 472 0
Oct 08 09:47:21 compute-0 systemd[1]: libpod-e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51.scope: Deactivated successfully.
Oct 08 09:47:21 compute-0 podman[97496]: 2025-10-08 09:47:21.978700759 +0000 UTC m=+5.169593328 container died e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-72bb59340fec070e33f6e73b735c61ee0a23b95db8b46154866420f69b84dcf5-merged.mount: Deactivated successfully.
Oct 08 09:47:22 compute-0 podman[97496]: 2025-10-08 09:47:22.031652599 +0000 UTC m=+5.222545178 container remove e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:22 compute-0 systemd[1]: libpod-conmon-e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51.scope: Deactivated successfully.
Oct 08 09:47:22 compute-0 podman[97735]: 2025-10-08 09:47:22.097355171 +0000 UTC m=+0.041841871 container create 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:22 compute-0 systemd[1]: Started libpod-conmon-2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61.scope.
Oct 08 09:47:22 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:22 compute-0 podman[97735]: 2025-10-08 09:47:22.150844858 +0000 UTC m=+0.095331588 container init 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:22 compute-0 podman[97735]: 2025-10-08 09:47:22.15596638 +0000 UTC m=+0.100453130 container start 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:22 compute-0 fervent_hopper[97751]: 472 0
Oct 08 09:47:22 compute-0 systemd[1]: libpod-2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61.scope: Deactivated successfully.
Oct 08 09:47:22 compute-0 podman[97735]: 2025-10-08 09:47:22.161634188 +0000 UTC m=+0.106120898 container attach 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:22 compute-0 podman[97735]: 2025-10-08 09:47:22.16200217 +0000 UTC m=+0.106488880 container died 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:22 compute-0 podman[97735]: 2025-10-08 09:47:22.079262631 +0000 UTC m=+0.023749361 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 08 09:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a014035d7ebe0cf60c822fbfe58852474930720f1424ae3978fdb2ca08872ad6-merged.mount: Deactivated successfully.
Oct 08 09:47:22 compute-0 podman[97735]: 2025-10-08 09:47:22.203277203 +0000 UTC m=+0.147763943 container remove 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:22 compute-0 systemd[1]: libpod-conmon-2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61.scope: Deactivated successfully.
Oct 08 09:47:22 compute-0 systemd[1]: Reloading.
Oct 08 09:47:22 compute-0 systemd-sysv-generator[97824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:22 compute-0 systemd-rc-local-generator[97817]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:22 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 08 09:47:22 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 08 09:47:22 compute-0 sudo[97793]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-satnmvlgzrmgkqigzzbjdluhibwwzzix ; /usr/bin/python3'
Oct 08 09:47:22 compute-0 sudo[97793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:47:22 compute-0 systemd[1]: Reloading.
Oct 08 09:47:22 compute-0 ceph-mon[73572]: pgmap v48: 353 pgs: 1 peering, 93 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:22 compute-0 ceph-mon[73572]: 8.16 scrub starts
Oct 08 09:47:22 compute-0 ceph-mon[73572]: 8.16 scrub ok
Oct 08 09:47:22 compute-0 ceph-mon[73572]: 10.15 scrub starts
Oct 08 09:47:22 compute-0 ceph-mon[73572]: 10.15 scrub ok
Oct 08 09:47:22 compute-0 systemd-sysv-generator[97867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:22 compute-0 systemd-rc-local-generator[97863]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:22 compute-0 python3[97831]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:47:22 compute-0 podman[97869]: 2025-10-08 09:47:22.855897998 +0000 UTC m=+0.042225583 container create 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 08 09:47:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:22 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:22 compute-0 systemd[1]: Started libpod-conmon-50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead.scope.
Oct 08 09:47:22 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:47:22 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:22 compute-0 podman[97869]: 2025-10-08 09:47:22.837285111 +0000 UTC m=+0.023612716 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f750cfd5cc883c54b45ef76d6b79714621ed94c20e0f08145e2cd8bf14557cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f750cfd5cc883c54b45ef76d6b79714621ed94c20e0f08145e2cd8bf14557cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:22 compute-0 podman[97869]: 2025-10-08 09:47:22.95011547 +0000 UTC m=+0.136443085 container init 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:47:22 compute-0 podman[97869]: 2025-10-08 09:47:22.956087059 +0000 UTC m=+0.142414644 container start 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 09:47:22 compute-0 podman[97869]: 2025-10-08 09:47:22.959128055 +0000 UTC m=+0.145455640 container attach 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:47:23 compute-0 cool_turing[97887]: could not fetch user info: no user info saved
Oct 08 09:47:23 compute-0 podman[98013]: 2025-10-08 09:47:23.119689169 +0000 UTC m=+0.042957495 container create 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:23 compute-0 systemd[1]: libpod-50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead.scope: Deactivated successfully.
Oct 08 09:47:23 compute-0 conmon[97887]: conmon 50f1c0b74f27cdc0e407 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead.scope/container/memory.events
Oct 08 09:47:23 compute-0 podman[97869]: 2025-10-08 09:47:23.186498516 +0000 UTC m=+0.372826111 container died 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 09:47:23 compute-0 podman[98013]: 2025-10-08 09:47:23.187441176 +0000 UTC m=+0.110709482 container init 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:23 compute-0 podman[98013]: 2025-10-08 09:47:23.192382562 +0000 UTC m=+0.115650858 container start 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:23 compute-0 podman[98013]: 2025-10-08 09:47:23.098045687 +0000 UTC m=+0.021314013 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 08 09:47:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 1 peering, 93 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:23 compute-0 bash[98013]: 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5
Oct 08 09:47:23 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f750cfd5cc883c54b45ef76d6b79714621ed94c20e0f08145e2cd8bf14557cf-merged.mount: Deactivated successfully.
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8000d90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:23 compute-0 podman[97869]: 2025-10-08 09:47:23.240968045 +0000 UTC m=+0.427295630 container remove 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:47:23 compute-0 systemd[1]: libpod-conmon-50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead.scope: Deactivated successfully.
Oct 08 09:47:23 compute-0 sudo[97793]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:23 compute-0 sudo[97429]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct 08 09:47:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 9550db9d-3c92-4760-9334-11f23ea86e6f (Updating grafana deployment (+1 -> 1))
Oct 08 09:47:23 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 9550db9d-3c92-4760-9334-11f23ea86e6f (Updating grafana deployment (+1 -> 1)) in 7 seconds
Oct 08 09:47:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct 08 09:47:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev eb90faac-447e-4af6-82aa-528626b39460 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 08 09:47:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Oct 08 09:47:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.zadvee on compute-0
Oct 08 09:47:23 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.zadvee on compute-0
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363525741Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-08T09:47:23Z
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363760508Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363772358Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363776389Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363779909Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363783469Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363787129Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363791749Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363795519Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363799329Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363803189Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36380773Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36381219Z level=info msg=Target target=[all]
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36381898Z level=info msg="Path Home" path=/usr/share/grafana
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36382308Z level=info msg="Path Data" path=/var/lib/grafana
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36382735Z level=info msg="Path Logs" path=/var/log/grafana
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36383128Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36383528Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363838631Z level=info msg="App mode production"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=sqlstore t=2025-10-08T09:47:23.364117969Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=sqlstore t=2025-10-08T09:47:23.36413979Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.364693748Z level=info msg="Starting DB migrations"
Oct 08 09:47:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.365804262Z level=info msg="Executing migration" id="create migration_log table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.366931908Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.128156ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.37045974Z level=info msg="Executing migration" id="create user table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.371220483Z level=info msg="Migration successfully executed" id="create user table" duration=760.463µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.372858955Z level=info msg="Executing migration" id="add unique index user.login"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.373432694Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=573.779µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.375350914Z level=info msg="Executing migration" id="add unique index user.email"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.375928722Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=577.748µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.377508462Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.378304987Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=795.925µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.380023411Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.380718503Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=694.552µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.382322583Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.385026908Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.700715ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.386820476Z level=info msg="Executing migration" id="create user table v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.38760154Z level=info msg="Migration successfully executed" id="create user table v2" duration=780.234µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.390175222Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Oct 08 09:47:23 compute-0 sudo[98096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vegcnvoxmbnrmgjkrthcptgyrneqhtvj ; /usr/bin/python3'
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.390807151Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=631.629µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.392547646Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.393185066Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=637.07µs
Oct 08 09:47:23 compute-0 sudo[98096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.39518904Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.39551368Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=324.64µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.397188012Z level=info msg="Executing migration" id="Drop old table user_v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.397717129Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=526.837µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.399710822Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.40059622Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=887.058µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.403278295Z level=info msg="Executing migration" id="Update user table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.403303436Z level=info msg="Migration successfully executed" id="Update user table charset" duration=25.91µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.404957437Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.405827025Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=869.258µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.407424985Z level=info msg="Executing migration" id="Add missing user data"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.407644162Z level=info msg="Migration successfully executed" id="Add missing user data" duration=218.897µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.411817514Z level=info msg="Executing migration" id="Add is_disabled column to user"
Oct 08 09:47:23 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.413192517Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.373973ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.414974014Z level=info msg="Executing migration" id="Add index user.login/user.email"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.415697556Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=722.642µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.417581746Z level=info msg="Executing migration" id="Add is_service_account column to user"
Oct 08 09:47:23 compute-0 sudo[98091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.418491684Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=909.958µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.420307332Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Oct 08 09:47:23 compute-0 sudo[98091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:23 compute-0 sudo[98091]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.427101386Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.791324ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.429196303Z level=info msg="Executing migration" id="Add uid column to user"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.430161392Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=965.07µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.432497376Z level=info msg="Executing migration" id="Update uid column values for users"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.432671921Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=176.825µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.435063828Z level=info msg="Executing migration" id="Add unique index user_uid"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.435702597Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=639.189µs
Oct 08 09:47:23 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.440386816Z level=info msg="Executing migration" id="create temp user table v1-7"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.441357196Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=970.43µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.445684523Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.446413345Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=728.653µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.448339766Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.449014608Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=674.552µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.450957629Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.451516767Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=559.138µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.453196139Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.453739987Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=543.987µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.455475771Z level=info msg="Executing migration" id="Update temp_user table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.455523922Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=48.641µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.457237447Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.457812504Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=570.907µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.459984694Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.460559561Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=576.287µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.463831014Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.464426613Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=597.409µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.466079105Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.466765367Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=686.302µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.469793643Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.472312422Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.518239ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.474705077Z level=info msg="Executing migration" id="create temp_user v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.475367288Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=662.331µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.477490215Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.478100345Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=609.82µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.48302793Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.483705541Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=678.361µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.486130958Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.486730167Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=599.539µs
Oct 08 09:47:23 compute-0 sudo[98119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:47:23 compute-0 sudo[98119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.489724261Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.490415243Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=692.512µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.493206111Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.493604284Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=398.123µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.49632664Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.497134585Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=811.155µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.499142769Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.49953302Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=392.041µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.502135893Z level=info msg="Executing migration" id="create star table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.502660519Z level=info msg="Migration successfully executed" id="create star table" duration=524.536µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.50524241Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.505838649Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=596.249µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.507515303Z level=info msg="Executing migration" id="create org table v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.508096401Z level=info msg="Migration successfully executed" id="create org table v1" duration=581.068µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.510186527Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.510763445Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=576.748µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.514889335Z level=info msg="Executing migration" id="create org_user table v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.515482364Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=592.299µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.517936821Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.519092518Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.153317ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.521916066Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.523053293Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.122276ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.525789329Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.526741219Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=952µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.528764943Z level=info msg="Executing migration" id="Update org table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.528798304Z level=info msg="Migration successfully executed" id="Update org table charset" duration=34.801µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.531079466Z level=info msg="Executing migration" id="Update org_user table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.531109547Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=31.381µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.533881904Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.534167543Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=286.399µs
Oct 08 09:47:23 compute-0 python3[98114]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.536384183Z level=info msg="Executing migration" id="create dashboard table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.537388055Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.002342ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.540927607Z level=info msg="Executing migration" id="add index dashboard.account_id"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.541694Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=767.053µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.543769597Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.544598322Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=828.766µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.546527633Z level=info msg="Executing migration" id="create dashboard_tag table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.547157863Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=629.469µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.54895267Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.549685073Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=732.803µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.551541271Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.55243871Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=898.149µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.554677Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.560924798Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.242328ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.563631833Z level=info msg="Executing migration" id="create dashboard v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.564423567Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=792.544µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.568236078Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.56924346Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.007482ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.571229583Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.571976056Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=747.204µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.575877639Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.576254801Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=376.822µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.579687949Z level=info msg="Executing migration" id="drop table dashboard_v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.580615789Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=928.15µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.586310948Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.586420611Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=111.354µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.589216279Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Oct 08 09:47:23 compute-0 podman[98144]: 2025-10-08 09:47:23.589496658 +0000 UTC m=+0.040779477 container create dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.590832791Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.615332ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.592691169Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.594390743Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.699804ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.596417237Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.597831331Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.415054ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.60190454Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.602654324Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=749.224µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.6079268Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.609785428Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.864278ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.612801304Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.61395593Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.156796ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.616672536Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.617348837Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=676.201µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.621491598Z level=info msg="Executing migration" id="Update dashboard table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.621515199Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=24.401µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.625183585Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.625204875Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=22.38µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.63140505Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.632883127Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.478327ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.63457639Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.635989035Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.412645ms
Oct 08 09:47:23 compute-0 systemd[1]: Started libpod-conmon-dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb.scope.
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.638812024Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.640550089Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.737805ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.64249229Z level=info msg="Executing migration" id="Add column uid in dashboard"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.643937236Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.444666ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.646392763Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.646586459Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=193.296µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.648907203Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.649578014Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=670.182µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.651285557Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.652087563Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=799.216µs
Oct 08 09:47:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158173a8601ef455c874590a082955d8a4e8ee2f60a959f6a275ea7b73a78840/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158173a8601ef455c874590a082955d8a4e8ee2f60a959f6a275ea7b73a78840/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.662571913Z level=info msg="Executing migration" id="Update dashboard title length"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.662602254Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=35.271µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.664871416Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Oct 08 09:47:23 compute-0 podman[98144]: 2025-10-08 09:47:23.665453824 +0000 UTC m=+0.116736673 container init dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.665587449Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=717.632µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.667652993Z level=info msg="Executing migration" id="create dashboard_provisioning"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.668268723Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=615.74µs
Oct 08 09:47:23 compute-0 podman[98144]: 2025-10-08 09:47:23.573207545 +0000 UTC m=+0.024490364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.67004946Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Oct 08 09:47:23 compute-0 podman[98144]: 2025-10-08 09:47:23.671377581 +0000 UTC m=+0.122660400 container start dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.67386196Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.809579ms
Oct 08 09:47:23 compute-0 podman[98144]: 2025-10-08 09:47:23.675326006 +0000 UTC m=+0.126608825 container attach dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.675720899Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.676274115Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=553.226µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.678207907Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.678845827Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=637.189µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.681353287Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.681967445Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=613.899µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.688564864Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.689200874Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=639.331µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.691209737Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.691901869Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=692.162µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.693874061Z level=info msg="Executing migration" id="Add check_sum column"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.695792391Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.91815ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.69825837Z level=info msg="Executing migration" id="Add index for dashboard_title"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.699124556Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=866.926µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.701224852Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.702173533Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=948.501µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.704706513Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.70493864Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=232.528µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.707445009Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.708207863Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=762.704µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.710685761Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Oct 08 09:47:23 compute-0 ceph-mon[73572]: 8.15 scrub starts
Oct 08 09:47:23 compute-0 ceph-mon[73572]: 8.15 scrub ok
Oct 08 09:47:23 compute-0 ceph-mon[73572]: 10.14 deep-scrub starts
Oct 08 09:47:23 compute-0 ceph-mon[73572]: 10.14 deep-scrub ok
Oct 08 09:47:23 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.713205601Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.51869ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.71539889Z level=info msg="Executing migration" id="create data_source table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.716264847Z level=info msg="Migration successfully executed" id="create data_source table" duration=865.937µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.720962075Z level=info msg="Executing migration" id="add index data_source.account_id"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.72173513Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=774.965µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.723880967Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.72459123Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=710.083µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.726538251Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.727247914Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=709.483µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.73030902Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.731334243Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.025453ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.735254506Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.739881563Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.626647ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.742133354Z level=info msg="Executing migration" id="create data_source table v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.74330172Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.167587ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.747538474Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.748482934Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=944.63µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.753522493Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.75441458Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=892.117µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.757169407Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.757974553Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=800.906µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.760052258Z level=info msg="Executing migration" id="Add column with_credentials"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.76198689Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.935362ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.76390294Z level=info msg="Executing migration" id="Add secure json data column"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.765772659Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.869929ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.770435616Z level=info msg="Executing migration" id="Update data_source table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.770778767Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=342.091µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.774921467Z level=info msg="Executing migration" id="Update initial version to 1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.775273909Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=351.242µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.77723374Z level=info msg="Executing migration" id="Add read_only data column"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.779186092Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.952422ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.781614749Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.781872187Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=258.488µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.783811068Z level=info msg="Executing migration" id="Update json_data with nulls"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.784144099Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=333.381µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.786577275Z level=info msg="Executing migration" id="Add uid column"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.788672351Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.094566ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.791280663Z level=info msg="Executing migration" id="Update uid value"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.791551122Z level=info msg="Migration successfully executed" id="Update uid value" duration=270.839µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.795576589Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.797089507Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.516907ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.799352878Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.800701271Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.350223ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.803618623Z level=info msg="Executing migration" id="create api_key table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.804903113Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.28371ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.80767273Z level=info msg="Executing migration" id="add index api_key.account_id"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.80922333Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.55063ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.812276016Z level=info msg="Executing migration" id="add index api_key.key"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.813272367Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=996.781µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.815234349Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.816356254Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.124495ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.821537718Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.822785277Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.24923ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.824721198Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.826231686Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.510158ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.830401537Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.831577955Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.177348ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.834365203Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.839178074Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.814751ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.841200088Z level=info msg="Executing migration" id="create api_key table v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.841820128Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=615.819µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.845471743Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.846162355Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=687.842µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.850569914Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.851462672Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=895.388µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.853497616Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.854218939Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=719.313µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.859875148Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.860296281Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=422.104µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.862077267Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.862641745Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=565.018µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.868542301Z level=info msg="Executing migration" id="Update api_key table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.868586952Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=48.211µs
Oct 08 09:47:23 compute-0 busy_black[98160]: {
Oct 08 09:47:23 compute-0 busy_black[98160]:     "user_id": "openstack",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "display_name": "openstack",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "email": "",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "suspended": 0,
Oct 08 09:47:23 compute-0 busy_black[98160]:     "max_buckets": 1000,
Oct 08 09:47:23 compute-0 busy_black[98160]:     "subusers": [],
Oct 08 09:47:23 compute-0 busy_black[98160]:     "keys": [
Oct 08 09:47:23 compute-0 busy_black[98160]:         {
Oct 08 09:47:23 compute-0 busy_black[98160]:             "user": "openstack",
Oct 08 09:47:23 compute-0 busy_black[98160]:             "access_key": "32PZJT640EWC6V5K10TY",
Oct 08 09:47:23 compute-0 busy_black[98160]:             "secret_key": "Fa9b6AD4bUkQZvXtdLMApI7GwoxTHPqfY3ShJGwI",
Oct 08 09:47:23 compute-0 busy_black[98160]:             "active": true,
Oct 08 09:47:23 compute-0 busy_black[98160]:             "create_date": "2025-10-08T09:47:23.852017Z"
Oct 08 09:47:23 compute-0 busy_black[98160]:         }
Oct 08 09:47:23 compute-0 busy_black[98160]:     ],
Oct 08 09:47:23 compute-0 busy_black[98160]:     "swift_keys": [],
Oct 08 09:47:23 compute-0 busy_black[98160]:     "caps": [],
Oct 08 09:47:23 compute-0 busy_black[98160]:     "op_mask": "read, write, delete",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "default_placement": "",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "default_storage_class": "",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "placement_tags": [],
Oct 08 09:47:23 compute-0 busy_black[98160]:     "bucket_quota": {
Oct 08 09:47:23 compute-0 busy_black[98160]:         "enabled": false,
Oct 08 09:47:23 compute-0 busy_black[98160]:         "check_on_raw": false,
Oct 08 09:47:23 compute-0 busy_black[98160]:         "max_size": -1,
Oct 08 09:47:23 compute-0 busy_black[98160]:         "max_size_kb": 0,
Oct 08 09:47:23 compute-0 busy_black[98160]:         "max_objects": -1
Oct 08 09:47:23 compute-0 busy_black[98160]:     },
Oct 08 09:47:23 compute-0 busy_black[98160]:     "user_quota": {
Oct 08 09:47:23 compute-0 busy_black[98160]:         "enabled": false,
Oct 08 09:47:23 compute-0 busy_black[98160]:         "check_on_raw": false,
Oct 08 09:47:23 compute-0 busy_black[98160]:         "max_size": -1,
Oct 08 09:47:23 compute-0 busy_black[98160]:         "max_size_kb": 0,
Oct 08 09:47:23 compute-0 busy_black[98160]:         "max_objects": -1
Oct 08 09:47:23 compute-0 busy_black[98160]:     },
Oct 08 09:47:23 compute-0 busy_black[98160]:     "temp_url_keys": [],
Oct 08 09:47:23 compute-0 busy_black[98160]:     "type": "rgw",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "mfa_ids": [],
Oct 08 09:47:23 compute-0 busy_black[98160]:     "account_id": "",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "path": "/",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "create_date": "2025-10-08T09:47:23.851736Z",
Oct 08 09:47:23 compute-0 busy_black[98160]:     "tags": [],
Oct 08 09:47:23 compute-0 busy_black[98160]:     "group_ids": []
Oct 08 09:47:23 compute-0 busy_black[98160]: }
Oct 08 09:47:23 compute-0 busy_black[98160]: 
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.872722712Z level=info msg="Executing migration" id="Add expires to api_key table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.875555722Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.83864ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.87738679Z level=info msg="Executing migration" id="Add service account foreign key"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.880464216Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.076776ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.882496781Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.882716028Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=223.827µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.884664089Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.887765097Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.099648ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.889577474Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Oct 08 09:47:23 compute-0 podman[98281]: 2025-10-08 09:47:23.890150282 +0000 UTC m=+0.041145068 container create 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.892757985Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.17943ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.894860391Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.895993967Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.133727ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.899685464Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.900377575Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=695.392µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.902755099Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.903532304Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=777.455µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.905342692Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.906120816Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=777.814µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.90782146Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.90848555Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=662.38µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.910451183Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.911225727Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=774.374µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.913087966Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.913129307Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=41.461µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.914726947Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.914751028Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=24.771µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.916752581Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.919296672Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.543411ms
Oct 08 09:47:23 compute-0 systemd[1]: Started libpod-conmon-6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87.scope.
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.921535293Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.923609158Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.071006ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.927214342Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.927259583Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=45.681µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.928839793Z level=info msg="Executing migration" id="create quota table v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.929412811Z level=info msg="Migration successfully executed" id="create quota table v1" duration=572.929µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.931572689Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.932413056Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=840.336µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.934115469Z level=info msg="Executing migration" id="Update quota table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.934135499Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=20.65µs
Oct 08 09:47:23 compute-0 podman[98144]: 2025-10-08 09:47:23.935123751 +0000 UTC m=+0.386406600 container died dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.936797683Z level=info msg="Executing migration" id="create plugin_setting table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.937398293Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=600.74µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.94079025Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.94143564Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=644.91µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.944050303Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.946093127Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.042164ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.948797982Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.948818743Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=21.261µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.950545537Z level=info msg="Executing migration" id="create session table"
Oct 08 09:47:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:23 compute-0 systemd[1]: libpod-dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb.scope: Deactivated successfully.
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.951601441Z level=info msg="Migration successfully executed" id="create session table" duration=1.026143ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.953769689Z level=info msg="Executing migration" id="Drop old table playlist table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.953872652Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=103.433µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.955467063Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.955558886Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=92.013µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.957734284Z level=info msg="Executing migration" id="create playlist table v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.958344834Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=610.59µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.960057887Z level=info msg="Executing migration" id="create playlist item table v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.960663317Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=605.64µs
Oct 08 09:47:23 compute-0 podman[98281]: 2025-10-08 09:47:23.869003116 +0000 UTC m=+0.019997942 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.966110088Z level=info msg="Executing migration" id="Update playlist table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.966132949Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=23.421µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.967967217Z level=info msg="Executing migration" id="Update playlist_item table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.967989907Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=23.63µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.969595649Z level=info msg="Executing migration" id="Add playlist column created_at"
Oct 08 09:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-158173a8601ef455c874590a082955d8a4e8ee2f60a959f6a275ea7b73a78840-merged.mount: Deactivated successfully.
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.971969433Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.372944ms
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.976743384Z level=info msg="Executing migration" id="Add playlist column updated_at"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.979319275Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.578972ms
Oct 08 09:47:23 compute-0 podman[98281]: 2025-10-08 09:47:23.981146302 +0000 UTC m=+0.132141098 container init 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.982507025Z level=info msg="Executing migration" id="drop preferences table v2"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.982627339Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=121.014µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.984986334Z level=info msg="Executing migration" id="drop preferences table v3"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.985137708Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=151.105µs
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.987004687Z level=info msg="Executing migration" id="create preferences table v3"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.98773244Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=728.183µs
Oct 08 09:47:23 compute-0 podman[98281]: 2025-10-08 09:47:23.988618078 +0000 UTC m=+0.139612864 container start 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.989615199Z level=info msg="Executing migration" id="Update preferences table charset"
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.989668221Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=53.722µs
Oct 08 09:47:23 compute-0 confident_driscoll[98304]: 0 0
Oct 08 09:47:23 compute-0 systemd[1]: libpod-6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87.scope: Deactivated successfully.
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.99535091Z level=info msg="Executing migration" id="Add column team_id in preferences"
Oct 08 09:47:23 compute-0 podman[98281]: 2025-10-08 09:47:23.995925539 +0000 UTC m=+0.146920325 container attach 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct 08 09:47:23 compute-0 podman[98281]: 2025-10-08 09:47:23.996986512 +0000 UTC m=+0.147981298 container died 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct 08 09:47:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.998327534Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.975984ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.002235018Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.002457375Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=220.857µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.005284774Z level=info msg="Executing migration" id="Add column week_start in preferences"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.007610727Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.325843ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.009710013Z level=info msg="Executing migration" id="Add column preferences.json_data"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.01215794Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.448107ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.016545319Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.016824818Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=282.349µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.01877754Z level=info msg="Executing migration" id="Add preferences index org_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.019623967Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=846.327µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.021925439Z level=info msg="Executing migration" id="Add preferences index user_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.0229143Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=988.831µs
Oct 08 09:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-15a299666df75208b19bc13f287e74f6c95d5000aabd8ff9935fbeca52106f85-merged.mount: Deactivated successfully.
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.025949335Z level=info msg="Executing migration" id="create alert table v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.027009719Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.060474ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.029987563Z level=info msg="Executing migration" id="add index alert org_id & id "
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.030971715Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=984.311µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.03464274Z level=info msg="Executing migration" id="add index alert state"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.035615891Z level=info msg="Migration successfully executed" id="add index alert state" duration=971.641µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.039574326Z level=info msg="Executing migration" id="add index alert dashboard_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.040525835Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=951.719µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.042485848Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Oct 08 09:47:24 compute-0 podman[98281]: 2025-10-08 09:47:24.042800327 +0000 UTC m=+0.193795113 container remove 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.043590922Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.104574ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.045149712Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.046084201Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=934.179µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.047735103Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.048747205Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.011982ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.051289995Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Oct 08 09:47:24 compute-0 systemd[1]: libpod-conmon-6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87.scope: Deactivated successfully.
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.060060751Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.768556ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.062388625Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.06319489Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=806.875µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.065501193Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Oct 08 09:47:24 compute-0 podman[98144]: 2025-10-08 09:47:24.064338077 +0000 UTC m=+0.515620926 container remove dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.066539066Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.037603ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.068301851Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.068695374Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=393.293µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.070310475Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.070925934Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=615.259µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.072956799Z level=info msg="Executing migration" id="create alert_notification table v1"
Oct 08 09:47:24 compute-0 systemd[1]: libpod-conmon-dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb.scope: Deactivated successfully.
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.073778434Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=821.145µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.075572111Z level=info msg="Executing migration" id="Add column is_default"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.078355479Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.782868ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.080135835Z level=info msg="Executing migration" id="Add column frequency"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.082919073Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.782658ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.084769671Z level=info msg="Executing migration" id="Add column send_reminder"
Oct 08 09:47:24 compute-0 sudo[98096]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.088663924Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.892733ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.090594685Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.093876408Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.281394ms
Oct 08 09:47:24 compute-0 systemd[1]: Reloading.
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.095984164Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.097138251Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.153697ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.09932626Z level=info msg="Executing migration" id="Update alert table charset"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.099477355Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=151.855µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.101253252Z level=info msg="Executing migration" id="Update alert_notification table charset"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.101397456Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=146.585µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.103202343Z level=info msg="Executing migration" id="create notification_journal table v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.104002378Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=799.385µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.106398294Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.107697964Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.30166ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.109733699Z level=info msg="Executing migration" id="drop alert_notification_journal"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.110886675Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.152576ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.112766284Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.113832988Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.066244ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.115940254Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.117163703Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.222709ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.118995071Z level=info msg="Executing migration" id="Add for to alert table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.123937546Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.941556ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.127066405Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.132201258Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=5.131433ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.135609645Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.135826351Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=217.847µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.13800915Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.139276591Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.26678ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.141108068Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.142131001Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.022913ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.144569127Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.148827142Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.256745ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.151147345Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.151228198Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=81.753µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.153424747Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.154588264Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.163137ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.156520645Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.157951219Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.430004ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.160320834Z level=info msg="Executing migration" id="Drop old annotation table v4"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.160439378Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=119.414µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.164378772Z level=info msg="Executing migration" id="create annotation table v5"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.165683964Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.303252ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.167975976Z level=info msg="Executing migration" id="add index annotation 0 v3"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.169283507Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.308291ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.17125625Z level=info msg="Executing migration" id="add index annotation 1 v3"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.172394625Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.138455ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.174433869Z level=info msg="Executing migration" id="add index annotation 2 v3"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.175554035Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.122176ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.177657161Z level=info msg="Executing migration" id="add index annotation 3 v3"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.178935602Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.278051ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.181151121Z level=info msg="Executing migration" id="add index annotation 4 v3"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.182481484Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.330303ms
Oct 08 09:47:24 compute-0 systemd-sysv-generator[98366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:24 compute-0 systemd-rc-local-generator[98362]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.189486974Z level=info msg="Executing migration" id="Update annotation table charset"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.189554906Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=71.802µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.192451788Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.202393961Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=9.939393ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.205284272Z level=info msg="Executing migration" id="Drop category_id index"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.208556586Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=3.272163ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.211146947Z level=info msg="Executing migration" id="Add column tags to annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.21916761Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=8.020423ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.221514585Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.222898018Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.383613ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.225387747Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.226956977Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.56873ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.229801016Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.231498319Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.697553ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.233968007Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.251737278Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=17.769671ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.253855595Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.254609339Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=751.514µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.256323202Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.257224321Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=900.219µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.259118711Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.259443261Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=326.36µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.261191896Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.261833136Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=640.89µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.264867802Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.265070868Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=203.246µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.266896396Z level=info msg="Executing migration" id="Add created time to annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.270716187Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.81944ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.272439591Z level=info msg="Executing migration" id="Add updated time to annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.276254401Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.813811ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.278088809Z level=info msg="Executing migration" id="Add index for created in annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.279282967Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.193328ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.281346022Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.282155708Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=809.465µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.284191811Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.284417929Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=226.648µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.286266507Z level=info msg="Executing migration" id="Add epoch_end column"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.289328793Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.061446ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.291167212Z level=info msg="Executing migration" id="Add index for epoch_end"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.291891874Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=724.232µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.293612699Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.293768114Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=150.165µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.295408245Z level=info msg="Executing migration" id="Move region to single row"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.295720865Z level=info msg="Migration successfully executed" id="Move region to single row" duration=312.57µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.297657186Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.298488092Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=830.856µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.300226597Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.30095851Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=731.323µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.302747167Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.30348876Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=738.783µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.305263577Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.305998779Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=734.992µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.308017224Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.308927391Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=909.818µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.310689388Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.311538934Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=851.316µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.313358692Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.313432574Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=73.902µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.315087516Z level=info msg="Executing migration" id="create test_data table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.315724966Z level=info msg="Migration successfully executed" id="create test_data table" duration=637.29µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.317261224Z level=info msg="Executing migration" id="create dashboard_version table v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.317958557Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=697.173µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.319582538Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.320399823Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=817.185µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.321905231Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.322685335Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=779.724µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.324427221Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.324591326Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=164.125µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.326414313Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.326743384Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=328.751µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.328443657Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.32852886Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=85.113µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.33042948Z level=info msg="Executing migration" id="create team table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.331026499Z level=info msg="Migration successfully executed" id="create team table" duration=596.638µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.332888068Z level=info msg="Executing migration" id="add index team.org_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.334082125Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.192417ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.336014127Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.336907814Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=893.497µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.338729862Z level=info msg="Executing migration" id="Add column uid in team"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.342314945Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.584652ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.344009969Z level=info msg="Executing migration" id="Update uid column values in team"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.344215005Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=205.106µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.346085914Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.347076895Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=991.111µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.348772729Z level=info msg="Executing migration" id="create team member table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.349502952Z level=info msg="Migration successfully executed" id="create team member table" duration=729.623µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.35323828Z level=info msg="Executing migration" id="add index team_member.org_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.354068056Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=830.116µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.35576109Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.356729389Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=967.159µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.35830434Z level=info msg="Executing migration" id="add index team_member.team_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.359246029Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=939.889µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.361314325Z level=info msg="Executing migration" id="Add column email to team table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.366669134Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.351018ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.368867153Z level=info msg="Executing migration" id="Add column external to team_member table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.372497697Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.630384ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.374617824Z level=info msg="Executing migration" id="Add column permission to team_member table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.378195847Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.576693ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.380121328Z level=info msg="Executing migration" id="create dashboard acl table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.380999946Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=878.638µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.382794362Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.383712401Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=917.689µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.386100516Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.387164419Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.063573ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.389420951Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.390609858Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.188537ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.392594391Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.393495029Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=900.889µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.395060969Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.395848273Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=787.434µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.397602049Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.398408164Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=805.705µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.399946373Z level=info msg="Executing migration" id="add index dashboard_permission"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.400720008Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=773.535µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.40240217Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.402846125Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=443.395µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.404382833Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.404585699Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=202.996µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.406137349Z level=info msg="Executing migration" id="create tag table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.406749407Z level=info msg="Migration successfully executed" id="create tag table" duration=613.559µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.408267356Z level=info msg="Executing migration" id="add index tag.key_value"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.408980478Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=712.792µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.410559578Z level=info msg="Executing migration" id="create login attempt table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.411155337Z level=info msg="Migration successfully executed" id="create login attempt table" duration=595.339µs
Oct 08 09:47:24 compute-0 systemd[1]: Reloading.
Oct 08 09:47:24 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.414320707Z level=info msg="Executing migration" id="add index login_attempt.username"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.415240045Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=918.218µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.417091464Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.418278511Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.186487ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.420205642Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Oct 08 09:47:24 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.430182467Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=9.975545ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.431941802Z level=info msg="Executing migration" id="create login_attempt v2"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.432687696Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=745.993µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.434863664Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.435598247Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=734.523µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.437290691Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.43757843Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=287.799µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.439269174Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.439803511Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=533.747µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.441643728Z level=info msg="Executing migration" id="create user auth table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.442449164Z level=info msg="Migration successfully executed" id="create user auth table" duration=805.826µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.444573841Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.445527491Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=953.25µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.447384489Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.447461522Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=79.063µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.449368842Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.453381939Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.011027ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.455845276Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.459931485Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.084699ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.461847066Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.466004946Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.15624ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.468367281Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.473539905Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.173494ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.477019154Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.477870141Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=851.217µs
Oct 08 09:47:24 compute-0 systemd-rc-local-generator[98403]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.481343211Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.485533432Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.189161ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.48798256Z level=info msg="Executing migration" id="create server_lock table"
Oct 08 09:47:24 compute-0 systemd-sysv-generator[98406]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.490743717Z level=info msg="Migration successfully executed" id="create server_lock table" duration=2.754847ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.493279638Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.49402397Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=745.033µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.497353445Z level=info msg="Executing migration" id="create user auth token table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.498197132Z level=info msg="Migration successfully executed" id="create user auth token table" duration=843.317µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.50035453Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.501028682Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=673.882µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.502931681Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.503681955Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=747.574µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.505326057Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.506136123Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=809.936µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.507917028Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.51175501Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.837462ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.51336742Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.514083863Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=716.773µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.515578321Z level=info msg="Executing migration" id="create cache_data table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.516268042Z level=info msg="Migration successfully executed" id="create cache_data table" duration=689.491µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.518177593Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.518888584Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=713.131µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.520657611Z level=info msg="Executing migration" id="create short_url table v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.521443835Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=787.624µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.525532694Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.526306509Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=773.495µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.527604999Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.527650551Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=46.462µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.529219651Z level=info msg="Executing migration" id="delete alert_definition table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.529292383Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=72.952µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.531034288Z level=info msg="Executing migration" id="recreate alert_definition table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.531800722Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=768.574µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.534804177Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.535786857Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=982.26µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.537669988Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.538420471Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=750.282µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.540080593Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.540125475Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=45.342µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.541785497Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.542526351Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=740.693µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.544247294Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.544934746Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=687.122µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.546536317Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.547454466Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=917.729µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.549124409Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.549987585Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=863.176µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.551812764Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.556052547Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.224072ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.55930769Z level=info msg="Executing migration" id="drop alert_definition table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.560154437Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=846.837µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.561679454Z level=info msg="Executing migration" id="delete alert_definition_version table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.561744157Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=64.983µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.563623205Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.564469543Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=846.207µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.567135317Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.567921642Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=785.915µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.569611715Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.570392789Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=780.844µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.572151015Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.572197246Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=46.321µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.574808878Z level=info msg="Executing migration" id="drop alert_definition_version table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.575657625Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=847.997µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.577188724Z level=info msg="Executing migration" id="create alert_instance table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.577873725Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=684.681µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.579436305Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.580181148Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=745.113µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.582362927Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.583282196Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=918.609µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.584891657Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.589896615Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.001617ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.591540216Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.592312141Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=772.005µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.594142708Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.594856451Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=715.963µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.59705609Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.61889701Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=21.855251ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.623883757Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.643899098Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=20.013131ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.645741486Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.646608904Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=868.048µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.648058609Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.648745911Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=687.512µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.650291669Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.654271635Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=3.979396ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.656560197Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.660699948Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.141631ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.663711273Z level=info msg="Executing migration" id="create alert_rule table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.664628652Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=919.879µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.666810131Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.667664128Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=856.616µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.669591929Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.670542948Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=954.259µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.672301324Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.67314262Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=841.276µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.675535676Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.675597468Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=62.472µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.679588454Z level=info msg="Executing migration" id="add column for to alert_rule"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.683986642Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.397578ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.687494303Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Oct 08 09:47:24 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.zadvee for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.698411557Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=10.917214ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.701291058Z level=info msg="Executing migration" id="add column labels to alert_rule"
Oct 08 09:47:24 compute-0 python3[98435]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.711641875Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=10.350247ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.713937998Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.715746734Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.806126ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.718646096Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.720534675Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.887289ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.722838959Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.732306737Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.461308ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.734706292Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Oct 08 09:47:24 compute-0 ceph-mon[73572]: pgmap v49: 353 pgs: 1 peering, 93 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:24 compute-0 ceph-mon[73572]: Deploying daemon haproxy.rgw.default.compute-0.zadvee on compute-0
Oct 08 09:47:24 compute-0 ceph-mon[73572]: 9.16 scrub starts
Oct 08 09:47:24 compute-0 ceph-mon[73572]: 9.16 scrub ok
Oct 08 09:47:24 compute-0 ceph-mon[73572]: 10.2 scrub starts
Oct 08 09:47:24 compute-0 ceph-mon[73572]: 10.2 scrub ok
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.751607955Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=16.897423ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.755006903Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.75713261Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.126276ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.760265639Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.769698476Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.432108ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.772018989Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.778308848Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.289419ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.780474136Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.780538018Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=64.762µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.783390658Z level=info msg="Executing migration" id="create alert_rule_version table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.786392163Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=3.003545ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.789323766Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.791821034Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.496029ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.795918533Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.798299719Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=2.379646ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.801353704Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.801667074Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=311.02µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.804749432Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.810950827Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.196256ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.812964191Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.818915909Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.951097ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.82085306Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.826872309Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.018869ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.828744079Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.834727928Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.983349ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.836756402Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.842891615Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.133923ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.845122736Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.845307251Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=184.766µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.847353786Z level=info msg="Executing migration" id=create_alert_configuration_table
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.848330927Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=976.681µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.850313199Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.856412822Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.098723ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.858560399Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Oct 08 09:47:24 compute-0 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:40840] [GET] [200] [0.116s] [6.3K] [75ce02be-8930-488b-9b75-1f211d459076] /
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.858741995Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=181.736µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.860653435Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.867678627Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.021562ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.869897947Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.870963711Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.068744ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.872993985Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.879721296Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.725061ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.88171395Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.882402222Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=689.102µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.884839208Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.885582371Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=743.273µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.887096839Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.891464887Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.367838ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.89313686Z level=info msg="Executing migration" id="create provenance_type table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:24 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.893771459Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=640.679µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.895312849Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.896115604Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=802.845µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.897781476Z level=info msg="Executing migration" id="create alert_image table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.898472858Z level=info msg="Migration successfully executed" id="create alert_image table" duration=691.522µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.899997166Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.900767601Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=769.995µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.902250377Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.90233373Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=83.603µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.903888119Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.904666184Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=778.045µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.90612628Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.906901024Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=774.754µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.90836996Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.90868149Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.910810237Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.911302153Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=491.736µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.913003807Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.913941856Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=937.999µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.915733163Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.922006Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.267527ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.924602832Z level=info msg="Executing migration" id="create library_element table v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.925995206Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.390334ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.928118653Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.929521308Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.399485ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.931216671Z level=info msg="Executing migration" id="create library_element_connection table v1"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.932180202Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=963.741µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.93561209Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.936745685Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.133515ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.938543462Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.939705229Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.161207ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.941456954Z level=info msg="Executing migration" id="increase max description length to 2048"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.941488885Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=32.811µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.943828329Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.943910122Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=82.183µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.946368699Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.94669238Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=323.501µs
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.948633601Z level=info msg="Executing migration" id="create data_keys table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.949741455Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.108505ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.95177663Z level=info msg="Executing migration" id="create secrets table"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.952649157Z level=info msg="Migration successfully executed" id="create secrets table" duration=872.427µs
Oct 08 09:47:24 compute-0 podman[98488]: 2025-10-08 09:47:24.955646491 +0000 UTC m=+0.050198374 container create 8c1b83f4045183ce85a8a1c015c338bfd7d7cdf70eee65cf19c9e353ede24b18 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-rgw-default-compute-0-zadvee)
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.95590603Z level=info msg="Executing migration" id="rename data_keys name column to id"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.989968384Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.058624ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.991569845Z level=info msg="Executing migration" id="add name column into data_keys"
Oct 08 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb94876679d3a90106e8c2f3621edec27290aaed4850928ee71a79ddaebfd34b/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.996422238Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=4.852193ms
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.999732063Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Oct 08 09:47:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.999841916Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=110.203µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.001475717Z level=info msg="Executing migration" id="rename data_keys name column to label"
Oct 08 09:47:25 compute-0 podman[98488]: 2025-10-08 09:47:25.00662733 +0000 UTC m=+0.101179303 container init 8c1b83f4045183ce85a8a1c015c338bfd7d7cdf70eee65cf19c9e353ede24b18 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-rgw-default-compute-0-zadvee)
Oct 08 09:47:25 compute-0 podman[98488]: 2025-10-08 09:47:25.012709482 +0000 UTC m=+0.107261395 container start 8c1b83f4045183ce85a8a1c015c338bfd7d7cdf70eee65cf19c9e353ede24b18 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-rgw-default-compute-0-zadvee)
Oct 08 09:47:25 compute-0 bash[98488]: 8c1b83f4045183ce85a8a1c015c338bfd7d7cdf70eee65cf19c9e353ede24b18
Oct 08 09:47:25 compute-0 podman[98488]: 2025-10-08 09:47:24.932674108 +0000 UTC m=+0.027225991 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-rgw-default-compute-0-zadvee[98503]: [NOTICE] 280/094725 (2) : New worker #1 (4) forked
Oct 08 09:47:25 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.zadvee for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:47:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.026977292Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=25.496845ms
Oct 08 09:47:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:47:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:25.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.028871971Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.05674708Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.884419ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.058566149Z level=info msg="Executing migration" id="create kv_store table v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.059546749Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=980.201µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.061730097Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.062677128Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=946.151µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.064300029Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.064469654Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=170.575µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.066067575Z level=info msg="Executing migration" id="create permission table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.066828869Z level=info msg="Migration successfully executed" id="create permission table" duration=761.364µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.068727849Z level=info msg="Executing migration" id="add unique index permission.role_id"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.069510274Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=782.795µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.071125564Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.071985622Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=860.088µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.073486228Z level=info msg="Executing migration" id="create role table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.074279124Z level=info msg="Migration successfully executed" id="create role table" duration=792.526µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.075680727Z level=info msg="Executing migration" id="add column display_name"
Oct 08 09:47:25 compute-0 sudo[98119]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.081804181Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.121274ms
Oct 08 09:47:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.083580777Z level=info msg="Executing migration" id="add column group_name"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.088803722Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.221275ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.090512576Z level=info msg="Executing migration" id="add index role.org_id"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.091439415Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=926.439µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.093764778Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.09476582Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.000492ms
Oct 08 09:47:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.096396141Z level=info msg="Executing migration" id="add index role_org_id_uid"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.097439975Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.040713ms
Oct 08 09:47:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.099674015Z level=info msg="Executing migration" id="create team role table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.100685787Z level=info msg="Migration successfully executed" id="create team role table" duration=1.011922ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.104121755Z level=info msg="Executing migration" id="add index team_role.org_id"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.105076075Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=951.41µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.108017768Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.109605928Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.58731ms
Oct 08 09:47:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.112904122Z level=info msg="Executing migration" id="add index team_role.team_id"
Oct 08 09:47:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.11410555Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.204548ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.115678389Z level=info msg="Executing migration" id="create user role table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.116472935Z level=info msg="Migration successfully executed" id="create user role table" duration=794.396µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.118217569Z level=info msg="Executing migration" id="add index user_role.org_id"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.120367958Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.148269ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.123531497Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.126119019Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.586561ms
Oct 08 09:47:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.128257577Z level=info msg="Executing migration" id="add index user_role.user_id"
Oct 08 09:47:25 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.frbwni on compute-2
Oct 08 09:47:25 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.frbwni on compute-2
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.130254199Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.996522ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.133416789Z level=info msg="Executing migration" id="create builtin role table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.134902566Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.486117ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.137212509Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.139168671Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.953422ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.141177844Z level=info msg="Executing migration" id="add index builtin_role.name"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.143378383Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.201189ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.147149262Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.161410002Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=14.26168ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.163748816Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.165263073Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.514177ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.166903496Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.168272108Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.367982ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.171201681Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.172460831Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.25794ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.174178415Z level=info msg="Executing migration" id="add unique index role.uid"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.175368863Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.191998ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.177093557Z level=info msg="Executing migration" id="create seed assignment table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.177906503Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=814.185µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.180787883Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.181997842Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.209499ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.186314458Z level=info msg="Executing migration" id="add column hidden to role table"
Oct 08 09:47:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.196313603Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.996135ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.198290466Z level=info msg="Executing migration" id="permission kind migration"
Oct 08 09:47:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:47:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:47:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:47:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Oct 08 09:47:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:47:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.209228481Z level=info msg="Migration successfully executed" id="permission kind migration" duration=10.933875ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.211891615Z level=info msg="Executing migration" id="permission attribute migration"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.217674987Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.781602ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.219723272Z level=info msg="Executing migration" id="permission identifier migration"
Oct 08 09:47:25 compute-0 python3[98541]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.227478696Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.746705ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.229378767Z level=info msg="Executing migration" id="add permission identifier index"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.23046994Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.090384ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.232133423Z level=info msg="Executing migration" id="add permission action scope role_id index"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.233457145Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.322982ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.235247801Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.236593274Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.345853ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.238268426Z level=info msg="Executing migration" id="create query_history table v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.239114383Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=846.887µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.240647802Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.241605242Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=956.129µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.243096919Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.243160781Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=59.632µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.245209446Z level=info msg="Executing migration" id="rbac disabled migrator"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.245253657Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=42.142µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.247058593Z level=info msg="Executing migration" id="teams permissions migration"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.247621751Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=564.198µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.249238752Z level=info msg="Executing migration" id="dashboard permissions"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.249846221Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=608.579µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.251404151Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.251934688Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=530.637µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.253392714Z level=info msg="Executing migration" id="drop managed folder create actions"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.253542158Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=149.694µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.254849679Z level=info msg="Executing migration" id="alerting notification permissions"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.255370846Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=521.297µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.256785201Z level=info msg="Executing migration" id="create query_history_star table v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.257477873Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=692.562µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.259169716Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.260306282Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.117015ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.262636105Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Oct 08 09:47:25 compute-0 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:40848] [GET] [200] [0.002s] [6.3K] [aa7dbdf5-1f88-42de-8182-57c808bf6b3b] /
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.268756749Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.119654ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.270651698Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.270727861Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=76.863µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.27231539Z level=info msg="Executing migration" id="create correlation table v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.273164417Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=846.197µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.274783438Z level=info msg="Executing migration" id="add index correlations.uid"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.275628815Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=844.997µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.277171554Z level=info msg="Executing migration" id="add index correlations.source_uid"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.27800655Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=835.056µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.280015094Z level=info msg="Executing migration" id="add correlation config column"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.285860387Z level=info msg="Migration successfully executed" id="add correlation config column" duration=5.844443ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.287647564Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.288527591Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=879.787µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.290606978Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.293571801Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.965603ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.295974607Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.336605558Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=40.625791ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.339351485Z level=info msg="Executing migration" id="create correlation v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.34077914Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.427895ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.342722151Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.343899489Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.176848ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.34583247Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.347166162Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.333843ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.34964765Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.35156039Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.91337ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.353918274Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.354383849Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=469.575µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.356399693Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.35788379Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.483877ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.364753656Z level=info msg="Executing migration" id="add provisioning column"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.37724039Z level=info msg="Migration successfully executed" id="add provisioning column" duration=12.481844ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.389074344Z level=info msg="Executing migration" id="create entity_events table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.390196139Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.129045ms
Oct 08 09:47:25 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 08 09:47:25 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f80018b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.455128297Z level=info msg="Executing migration" id="create dashboard public config v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.457711889Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=2.586632ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.475119588Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.475680756Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.495270344Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.496294035Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.518809676Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.521102608Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=2.295273ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.578274542Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.580778421Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=2.506198ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.585415927Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.587739021Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.323255ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.590666093Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.592836781Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.169998ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.5953265Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.59724933Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.92111ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.604550031Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.606938766Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.389295ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.608996541Z level=info msg="Executing migration" id="Drop public config table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.609837688Z level=info msg="Migration successfully executed" id="Drop public config table" duration=843.546µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.611635294Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.612530203Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=895.008µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.614394541Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.615904249Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.509628ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.618597524Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.619822602Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.224918ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.622378913Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.623891231Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.507888ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.626121491Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.657243583Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=31.098741ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.711365969Z level=info msg="Executing migration" id="add annotations_enabled column"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.725448183Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=14.080954ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.727584942Z level=info msg="Executing migration" id="add time_selection_enabled column"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.736991068Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.405396ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.739078354Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.739347152Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=268.358µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.741306874Z level=info msg="Executing migration" id="add share column"
Oct 08 09:47:25 compute-0 ceph-mon[73572]: 8.17 scrub starts
Oct 08 09:47:25 compute-0 ceph-mon[73572]: 8.17 scrub ok
Oct 08 09:47:25 compute-0 ceph-mon[73572]: 10.0 deep-scrub starts
Oct 08 09:47:25 compute-0 ceph-mon[73572]: 10.0 deep-scrub ok
Oct 08 09:47:25 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:25 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:25 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:25 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.751270278Z level=info msg="Migration successfully executed" id="add share column" duration=9.961934ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.753069726Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.753321493Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=285.379µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.788891526Z level=info msg="Executing migration" id="create file table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.790203226Z level=info msg="Migration successfully executed" id="create file table" duration=1.311831ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.79187708Z level=info msg="Executing migration" id="file table idx: path natural pk"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.792745766Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=869.186µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.794560734Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.79539873Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=838.006µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.797474406Z level=info msg="Executing migration" id="create file_meta table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.79857309Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.098694ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.800208832Z level=info msg="Executing migration" id="file table idx: path key"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.801495052Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.28496ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.803370602Z level=info msg="Executing migration" id="set path collation in file table"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.803439174Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=69.202µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.805545721Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.805607593Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=62.562µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.809308779Z level=info msg="Executing migration" id="managed permissions migration"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.809892177Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=584.348µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.811681024Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.811896751Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=216.067µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.813860983Z level=info msg="Executing migration" id="RBAC action name migrator"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.814928346Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.065813ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.849884229Z level=info msg="Executing migration" id="Add UID column to playlist"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.859476782Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.595053ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.887250018Z level=info msg="Executing migration" id="Update uid column values in playlist"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.887409693Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=161.585µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.889286642Z level=info msg="Executing migration" id="Add index for uid in playlist"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.890842021Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.553698ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.892847255Z level=info msg="Executing migration" id="update group index for alert rules"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.893303838Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=458.094µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.89622001Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.896478198Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=258.788µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.898440201Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.898946266Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=506.595µs
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.900901878Z level=info msg="Executing migration" id="add action column to seed_assignment"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.9104816Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.577352ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.912489324Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.923092048Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=10.601784ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.92506494Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.926506006Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.440646ms
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.931423341Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Oct 08 09:47:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:25.998Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003404702s
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.017824526Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=86.397225ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.020814691Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.021803752Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=988.091µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.061804583Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.06296092Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.157987ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.064670914Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.090421206Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.745842ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.097922583Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.104448659Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.523566ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.106659298Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.106928307Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=268.739µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.110092307Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.110245372Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=153.155µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.118027197Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.118196302Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=169.315µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.121841508Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.121990852Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=149.184µs
Oct 08 09:47:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.167664012Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.167835218Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=171.836µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.16978578Z level=info msg="Executing migration" id="create folder table"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.170583414Z level=info msg="Migration successfully executed" id="create folder table" duration=798.924µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.173363472Z level=info msg="Executing migration" id="Add index for parent_uid"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.174371974Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.009682ms
Oct 08 09:47:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.178451573Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.179337961Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=886.278µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.181605312Z level=info msg="Executing migration" id="Update folder title length"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.181624143Z level=info msg="Migration successfully executed" id="Update folder title length" duration=18.951µs
Oct 08 09:47:26 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.184462322Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.185361611Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=901.019µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.193242399Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.194285742Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.044513ms
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.19( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.196344227Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.1c( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.197343849Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=997.662µs
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.18( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.5( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.8( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.a( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.c( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.b( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.6( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.12( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.10( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.14( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040661812s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314361572s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.14( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040636063s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314361572s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065840721s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.340148926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065819740s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.340148926s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066782951s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341278076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.16( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040081978s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314575195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065618515s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.340118408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066769600s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341278076s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065597534s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.340118408s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.16( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040049553s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314575195s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.17( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040047646s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314666748s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.17( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040028572s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314666748s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066614151s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341293335s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066596031s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341293335s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.10( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039962769s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314682007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.11( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039947510s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314712524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.11( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039935112s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314712524s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066735268s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341583252s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066510201s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341583252s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066233635s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341293335s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.2( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039722443s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314804077s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.2( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039710045s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314804077s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066205025s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341293335s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.3( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040011406s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315200806s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.3( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039994240s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315200806s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039584160s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314865112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039558411s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314865112s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.15( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039474487s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314620972s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.10( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039942741s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314682007s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.15( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039208412s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314620972s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.8( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039422035s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314910889s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.8( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039410591s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314910889s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065825462s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341430664s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065814972s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341430664s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.a( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039337158s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315063477s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.a( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039321899s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315063477s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.9( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039178848s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314941406s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.9( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039118767s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314941406s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.d( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039288521s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315170288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.d( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039273262s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315170288s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065698624s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341613770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065679550s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341613770s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065616608s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341629028s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065602303s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341629028s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039060593s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315246582s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039015770s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315246582s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065329552s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341644287s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065310478s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341644287s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.038849831s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315292358s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.038825035s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315292358s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065237045s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341781616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065219879s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341781616s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067255020s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.343902588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.6( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040740967s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317459106s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.6( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040719032s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317459106s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067422867s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344146729s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067235947s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.343902588s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067331314s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344146729s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.5( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040680885s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317520142s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.5( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040670395s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317520142s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.4( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040611267s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317596436s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.4( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040597916s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317596436s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067056656s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344146729s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067019463s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344161987s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066965103s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344161987s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040418625s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317642212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066916466s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344223022s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040397644s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317642212s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066900253s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344223022s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.19( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040371895s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317779541s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.19( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040351868s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317779541s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067070961s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344528198s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067058563s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344528198s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.18( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040277481s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317794800s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.18( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040252686s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317794800s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066783905s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344360352s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040222168s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317825317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066760063s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344360352s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040188789s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317825317s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067039490s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344146729s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066552162s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344390869s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066520691s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344360352s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066532135s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344390869s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066502571s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344360352s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040002823s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317947388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039972305s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317947388s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.206341713Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.12( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039451599s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.318054199s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.12( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039312363s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.318054199s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.207255241Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=916.098µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.208877973Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.209245955Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=369.032µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.212174707Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.213568561Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.395384ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.216491873Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.217632889Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.140476ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.248079909Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.249956078Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.879289ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.254361217Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.25572885Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.367633ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.258186228Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.259345055Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.154947ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.262331939Z level=info msg="Executing migration" id="create anon_device table"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.263190966Z level=info msg="Migration successfully executed" id="create anon_device table" duration=859.137µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.264945712Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.265965743Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.018321ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.268288036Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.269212246Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=924.25µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.271985303Z level=info msg="Executing migration" id="create signing_key table"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.272880952Z level=info msg="Migration successfully executed" id="create signing_key table" duration=896.719µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.275350379Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.276331581Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=984.282µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.278386355Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.279597263Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.211108ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.283055012Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.283408864Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=354.781µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.285578452Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.292873002Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.29369ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.295251167Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.295893628Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=643.011µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.297891091Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.29882406Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=934.86µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.30202831Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.303132226Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.103596ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.304968713Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.306004106Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.035003ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.310128647Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.310975693Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=846.696µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.314883907Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.315713613Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=829.156µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.321505935Z level=info msg="Executing migration" id="create sso_setting table"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.322354952Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=846.807µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.324153089Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.324710426Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=557.847µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.326150812Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.326372189Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=220.387µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.328855357Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.328898389Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=43.292µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.332345838Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.33847747Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.131082ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.342853399Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.349083346Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.228916ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.35050022Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.350756548Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=256.128µs
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.35273069Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.986961619s
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=sqlstore t=2025-10-08T09:47:26.354074573Z level=info msg="Created default organization"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=secrets t=2025-10-08T09:47:26.356152108Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=plugin.store t=2025-10-08T09:47:26.387807417Z level=info msg="Loading plugins..."
Oct 08 09:47:26 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 08 09:47:26 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=local.finder t=2025-10-08T09:47:26.484574479Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=plugin.store t=2025-10-08T09:47:26.48460475Z level=info msg="Plugins loaded" count=55 duration=96.800693ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=query_data t=2025-10-08T09:47:26.487072277Z level=info msg="Query Service initialization"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=live.push_http t=2025-10-08T09:47:26.490690242Z level=info msg="Live Push Gateway initialization"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration t=2025-10-08T09:47:26.494508423Z level=info msg=Starting
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration t=2025-10-08T09:47:26.494970307Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration orgID=1 t=2025-10-08T09:47:26.495505404Z level=info msg="Migrating alerts for organisation"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration orgID=1 t=2025-10-08T09:47:26.496360871Z level=info msg="Alerts found to migrate" alerts=0
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration t=2025-10-08T09:47:26.498626242Z level=info msg="Completed alerting migration"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.state.manager t=2025-10-08T09:47:26.521273527Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=infra.usagestats.collector t=2025-10-08T09:47:26.524170008Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.datasources t=2025-10-08T09:47:26.525592623Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.alerting t=2025-10-08T09:47:26.542124315Z level=info msg="starting to provision alerting"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.alerting t=2025-10-08T09:47:26.542146295Z level=info msg="finished to provision alerting"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafanaStorageLogger t=2025-10-08T09:47:26.54230398Z level=info msg="Storage starting"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.state.manager t=2025-10-08T09:47:26.542441985Z level=info msg="Warming state cache for startup"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.multiorg.alertmanager t=2025-10-08T09:47:26.542730143Z level=info msg="Starting MultiOrg Alertmanager"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=http.server t=2025-10-08T09:47:26.54547192Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=http.server t=2025-10-08T09:47:26.545774179Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.state.manager t=2025-10-08T09:47:26.571026476Z level=info msg="State cache has been initialized" states=0 duration=28.583521ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.scheduler t=2025-10-08T09:47:26.571080917Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ticker t=2025-10-08T09:47:26.571129409Z level=info msg=starting first_tick=2025-10-08T09:47:30Z
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.dashboard t=2025-10-08T09:47:26.602572781Z level=info msg="starting to provision dashboards"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=plugins.update.checker t=2025-10-08T09:47:26.633367782Z level=info msg="Update check succeeded" duration=90.012019ms
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafana.update.checker t=2025-10-08T09:47:26.635334944Z level=info msg="Update check succeeded" duration=92.234189ms
Oct 08 09:47:26 compute-0 ceph-mon[73572]: Deploying daemon haproxy.rgw.default.compute-2.frbwni on compute-2
Oct 08 09:47:26 compute-0 ceph-mon[73572]: pgmap v50: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:26 compute-0 ceph-mon[73572]: 8.10 scrub starts
Oct 08 09:47:26 compute-0 ceph-mon[73572]: 8.10 scrub ok
Oct 08 09:47:26 compute-0 ceph-mon[73572]: 10.1 scrub starts
Oct 08 09:47:26 compute-0 ceph-mon[73572]: 10.1 scrub ok
Oct 08 09:47:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:47:26 compute-0 ceph-mon[73572]: osdmap e59: 3 total, 3 up, 3 in
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.dashboard t=2025-10-08T09:47:26.828867099Z level=info msg="finished to provision dashboards"
Oct 08 09:47:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:26 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:26.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafana-apiserver t=2025-10-08T09:47:27.015629461Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct 08 09:47:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafana-apiserver t=2025-10-08T09:47:27.016029303Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct 08 09:47:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:27.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:47:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:47:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 08 09:47:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Oct 08 09:47:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 08 09:47:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 08 09:47:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 08 09:47:27 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.10( v 58'65 lc 51'45 (0'0,58'65] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=58'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.15( v 58'57 lc 58'56 (0'0,58'57] local-lis/les=59/60 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=58'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.14( v 58'57 lc 58'56 (0'0,58'57] local-lis/les=59/60 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=58'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.6( v 51'62 lc 51'44 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.b( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.12( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.c( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=59/60 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.e( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.8( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.a( v 51'62 lc 0'0 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=59/60 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.1c( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.19( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Oct 08 09:47:27 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:27 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:27 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:27 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:27 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:27 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.bphuep on compute-0
Oct 08 09:47:27 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.bphuep on compute-0
Oct 08 09:47:27 compute-0 sudo[98549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:27 compute-0 sudo[98549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:27 compute-0 sudo[98549]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:27 compute-0 sudo[98574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:47:27 compute-0 sudo[98574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:27 compute-0 podman[98640]: 2025-10-08 09:47:27.865772437 +0000 UTC m=+0.067581753 container create 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container)
Oct 08 09:47:27 compute-0 ceph-mon[73572]: 11.15 scrub starts
Oct 08 09:47:27 compute-0 ceph-mon[73572]: 11.15 scrub ok
Oct 08 09:47:27 compute-0 ceph-mon[73572]: 12.15 scrub starts
Oct 08 09:47:27 compute-0 ceph-mon[73572]: 12.15 scrub ok
Oct 08 09:47:27 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:27 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:27 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 08 09:47:27 compute-0 ceph-mon[73572]: osdmap e60: 3 total, 3 up, 3 in
Oct 08 09:47:27 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:27 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:27 compute-0 podman[98640]: 2025-10-08 09:47:27.82085408 +0000 UTC m=+0.022663426 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 08 09:47:27 compute-0 systemd[1]: Started libpod-conmon-05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836.scope.
Oct 08 09:47:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:28 compute-0 podman[98640]: 2025-10-08 09:47:28.028623324 +0000 UTC m=+0.230432740 container init 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, io.openshift.tags=Ceph keepalived, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Oct 08 09:47:28 compute-0 podman[98640]: 2025-10-08 09:47:28.04496084 +0000 UTC m=+0.246770166 container start 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20)
Oct 08 09:47:28 compute-0 elated_bartik[98656]: 0 0
Oct 08 09:47:28 compute-0 systemd[1]: libpod-05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836.scope: Deactivated successfully.
Oct 08 09:47:28 compute-0 podman[98640]: 2025-10-08 09:47:28.093177281 +0000 UTC m=+0.294986677 container attach 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64)
Oct 08 09:47:28 compute-0 podman[98640]: 2025-10-08 09:47:28.093978805 +0000 UTC m=+0.295788161 container died 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, release=1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph)
Oct 08 09:47:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ee6b2075cb72eba111cf8b5dbede92b9c61f969f4a181378fd1ac67fa70d18f-merged.mount: Deactivated successfully.
Oct 08 09:47:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 08 09:47:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 08 09:47:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 08 09:47:28 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 08 09:47:28 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 23 completed events
Oct 08 09:47:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:47:28 compute-0 podman[98640]: 2025-10-08 09:47:28.332306743 +0000 UTC m=+0.534116059 container remove 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.28.2, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, name=keepalived, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 08 09:47:28 compute-0 systemd[1]: libpod-conmon-05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836.scope: Deactivated successfully.
Oct 08 09:47:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:28 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:28 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event b9fe5884-05d2-4569-bb9a-538e8e55db00 (Global Recovery Event) in 10 seconds
Oct 08 09:47:28 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.a scrub starts
Oct 08 09:47:28 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.a scrub ok
Oct 08 09:47:28 compute-0 systemd[1]: Reloading.
Oct 08 09:47:28 compute-0 systemd-rc-local-generator[98711]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:28 compute-0 systemd-sysv-generator[98715]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:28 compute-0 ceph-mon[73572]: pgmap v52: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:28 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:28 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:28 compute-0 ceph-mon[73572]: Deploying daemon keepalived.rgw.default.compute-0.bphuep on compute-0
Oct 08 09:47:28 compute-0 ceph-mon[73572]: 10.e scrub starts
Oct 08 09:47:28 compute-0 ceph-mon[73572]: 10.e scrub ok
Oct 08 09:47:28 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 08 09:47:28 compute-0 ceph-mon[73572]: osdmap e61: 3 total, 3 up, 3 in
Oct 08 09:47:28 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:28 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f80018b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:28.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:28 compute-0 systemd[1]: Reloading.
Oct 08 09:47:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:29.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:29 compute-0 systemd-rc-local-generator[98748]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:29 compute-0 systemd-sysv-generator[98752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Oct 08 09:47:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:29 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.bphuep for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:47:29 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 08 09:47:29 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:29 compute-0 podman[98809]: 2025-10-08 09:47:29.540705561 +0000 UTC m=+0.021203061 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 08 09:47:29 compute-0 podman[98809]: 2025-10-08 09:47:29.641320174 +0000 UTC m=+0.121817644 container create ad8a1348c81c698896af7b0b783bb40335d664a7a01f20c114dc63c98a072845 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., release=1793, io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64)
Oct 08 09:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80a31699ad10f5fa3ddd7e83a9f7352a11464bb70509c10da2b40cb8d83f11c/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:29 compute-0 podman[98809]: 2025-10-08 09:47:29.812789082 +0000 UTC m=+0.293286583 container init ad8a1348c81c698896af7b0b783bb40335d664a7a01f20c114dc63c98a072845 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1793, name=keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Oct 08 09:47:29 compute-0 podman[98809]: 2025-10-08 09:47:29.817965187 +0000 UTC m=+0.298462657 container start ad8a1348c81c698896af7b0b783bb40335d664a7a01f20c114dc63c98a072845 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, version=2.2.4, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Starting VRRP child process, pid=4
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Startup complete
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:29 2025: (VI_0) Entering BACKUP STATE
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: (VI_0) Entering BACKUP STATE (init)
Oct 08 09:47:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: VRRP_Script(check_backend) succeeded
Oct 08 09:47:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 08 09:47:30 compute-0 bash[98809]: ad8a1348c81c698896af7b0b783bb40335d664a7a01f20c114dc63c98a072845
Oct 08 09:47:30 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.bphuep for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:47:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 08 09:47:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 08 09:47:30 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025028229s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.314880371s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.024970055s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.314880371s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025154114s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.315155029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025109291s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.315155029s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.024371147s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.314498901s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025084496s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.315231323s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025041580s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.315231323s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.024296761s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.314498901s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026891708s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.317626953s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026872635s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.317626953s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026808739s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.317749023s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026784897s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.317749023s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026291847s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.318008423s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026341438s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.318145752s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026314735s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.318145752s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026110649s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.318008423s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:30 compute-0 ceph-mon[73572]: 12.a scrub starts
Oct 08 09:47:30 compute-0 ceph-mon[73572]: 12.a scrub ok
Oct 08 09:47:30 compute-0 ceph-mon[73572]: 10.c scrub starts
Oct 08 09:47:30 compute-0 ceph-mon[73572]: 10.c scrub ok
Oct 08 09:47:30 compute-0 ceph-mon[73572]: 10.3 scrub starts
Oct 08 09:47:30 compute-0 ceph-mon[73572]: 10.3 scrub ok
Oct 08 09:47:30 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 08 09:47:30 compute-0 sudo[98574]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 08 09:47:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:30 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:30 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:30 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:30 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:30 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.jvgfkf on compute-2
Oct 08 09:47:30 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.jvgfkf on compute-2
Oct 08 09:47:30 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 08 09:47:30 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 08 09:47:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:30 2025: (VI_0) Entering MASTER STATE
Oct 08 09:47:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:30 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:47:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:47:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:47:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:31.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:47:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 324 B/s, 0 keys/s, 2 objects/s recovering
Oct 08 09:47:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Oct 08 09:47:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 08 09:47:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 08 09:47:31 compute-0 ceph-mon[73572]: pgmap v55: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 9.11 scrub starts
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 9.11 scrub ok
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 10.a scrub starts
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 10.a scrub ok
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 12.1a scrub starts
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 12.1a scrub ok
Oct 08 09:47:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 08 09:47:31 compute-0 ceph-mon[73572]: osdmap e62: 3 total, 3 up, 3 in
Oct 08 09:47:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 08 09:47:31 compute-0 ceph-mon[73572]: Deploying daemon keepalived.rgw.default.compute-2.jvgfkf on compute-2
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 9.10 scrub starts
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 9.10 scrub ok
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 10.9 scrub starts
Oct 08 09:47:31 compute-0 ceph-mon[73572]: 10.9 scrub ok
Oct 08 09:47:31 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 08 09:47:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f80018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 08 09:47:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 08 09:47:31 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:31 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 08 09:47:31 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 08 09:47:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 08 09:47:32 compute-0 ceph-mon[73572]: 12.7 scrub starts
Oct 08 09:47:32 compute-0 ceph-mon[73572]: 12.7 scrub ok
Oct 08 09:47:32 compute-0 ceph-mon[73572]: pgmap v57: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 324 B/s, 0 keys/s, 2 objects/s recovering
Oct 08 09:47:32 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 08 09:47:32 compute-0 ceph-mon[73572]: osdmap e63: 3 total, 3 up, 3 in
Oct 08 09:47:32 compute-0 ceph-mon[73572]: 9.2 scrub starts
Oct 08 09:47:32 compute-0 ceph-mon[73572]: 9.2 scrub ok
Oct 08 09:47:32 compute-0 ceph-mon[73572]: 12.f scrub starts
Oct 08 09:47:32 compute-0 ceph-mon[73572]: 12.f scrub ok
Oct 08 09:47:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:47:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 08 09:47:32 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 08 09:47:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:47:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 08 09:47:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:32 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev eb90faac-447e-4af6-82aa-528626b39460 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 08 09:47:32 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event eb90faac-447e-4af6-82aa-528626b39460 (Updating ingress.rgw.default deployment (+4 -> 4)) in 9 seconds
Oct 08 09:47:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 08 09:47:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:32 compute-0 ceph-mgr[73869]: [progress INFO root] update: starting ev 0bf79cd8-eb11-4f4f-80b2-14468a3c828d (Updating prometheus deployment (+1 -> 1))
Oct 08 09:47:32 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Oct 08 09:47:32 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Oct 08 09:47:32 compute-0 sudo[98834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:32 compute-0 sudo[98834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:32 compute-0 sudo[98834]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:32 compute-0 sudo[98859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:47:32 compute-0 sudo[98859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:32 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 333 B/s, 0 keys/s, 3 objects/s recovering
Oct 08 09:47:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Oct 08 09:47:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 08 09:47:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:33 compute-0 ceph-mon[73572]: 12.18 scrub starts
Oct 08 09:47:33 compute-0 ceph-mon[73572]: 12.18 scrub ok
Oct 08 09:47:33 compute-0 ceph-mon[73572]: osdmap e64: 3 total, 3 up, 3 in
Oct 08 09:47:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:33 compute-0 ceph-mon[73572]: 10.d scrub starts
Oct 08 09:47:33 compute-0 ceph-mon[73572]: 10.d scrub ok
Oct 08 09:47:33 compute-0 ceph-mon[73572]: Deploying daemon prometheus.compute-0 on compute-0
Oct 08 09:47:33 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 08 09:47:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 08 09:47:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 08 09:47:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 08 09:47:33 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.873750687s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.311401367s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.873694420s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.311401367s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=3 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.941551208s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379364014s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.941587448s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379425049s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=3 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.941474915s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379364014s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.941536903s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379425049s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940853119s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379257202s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.944669724s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.383163452s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940762520s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379257202s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.944624901s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.383163452s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.876586914s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.315261841s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.876568794s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.315261841s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940307617s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379257202s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940267563s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379257202s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940160751s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379333496s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.878764153s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 58'1020 mlcod 58'1020 active pruub 190.317901611s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940132141s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379333496s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.878691673s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 58'1020 mlcod 0'0 unknown NOTIFY pruub 190.317901611s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.939930916s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379470825s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.878444672s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.318038940s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.939888000s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379470825s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.878426552s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.318038940s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.943156242s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.383178711s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.943113327s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.383178711s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.389910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853390021, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7355, "num_deletes": 261, "total_data_size": 13706102, "memory_usage": 13998832, "flush_reason": "Manual Compaction"}
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 08 09:47:33 compute-0 ceph-mgr[73869]: [progress INFO root] Writing back 25 completed events
Oct 08 09:47:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 08 09:47:33 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Oct 08 09:47:33 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853452211, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12234194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 143, "largest_seqno": 7493, "table_properties": {"data_size": 12207538, "index_size": 16941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 83309, "raw_average_key_size": 24, "raw_value_size": 12141638, "raw_average_value_size": 3529, "num_data_blocks": 745, "num_entries": 3440, "num_filter_entries": 3440, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916583, "oldest_key_time": 1759916583, "file_creation_time": 1759916853, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 62367 microseconds, and 23865 cpu microseconds.
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.452277) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12234194 bytes OK
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.452301) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.454013) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.454056) EVENT_LOG_v1 {"time_micros": 1759916853454027, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.454079) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13672968, prev total WAL file size 13684429, number of live WAL files 2.
Oct 08 09:47:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:33 2025: (VI_0) Entering MASTER STATE
Oct 08 09:47:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.460106) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323631' seq:0, type:0; will stop at (end)
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(58KB) 8(1944B)]
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853460194, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12295765, "oldest_snapshot_seqno": -1}
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3253 keys, 12277890 bytes, temperature: kUnknown
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853517308, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12277890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12251433, "index_size": 17195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 82126, "raw_average_key_size": 25, "raw_value_size": 12187076, "raw_average_value_size": 3746, "num_data_blocks": 756, "num_entries": 3253, "num_filter_entries": 3253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759916853, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.517689) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12277890 bytes
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.519259) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.6 rd, 214.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.7, 0.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3550, records dropped: 297 output_compression: NoCompression
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.519290) EVENT_LOG_v1 {"time_micros": 1759916853519277, "job": 4, "event": "compaction_finished", "compaction_time_micros": 57296, "compaction_time_cpu_micros": 23922, "output_level": 6, "num_output_files": 1, "total_output_size": 12277890, "num_input_records": 3550, "num_output_records": 3253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 09:47:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853522051, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853522124, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853522164, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 08 09:47:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.459994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:47:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 08 09:47:34 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Oct 08 09:47:34 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Oct 08 09:47:34 compute-0 ceph-mon[73572]: pgmap v60: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 333 B/s, 0 keys/s, 3 objects/s recovering
Oct 08 09:47:34 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 08 09:47:34 compute-0 ceph-mon[73572]: osdmap e65: 3 total, 3 up, 3 in
Oct 08 09:47:34 compute-0 ceph-mon[73572]: 11.0 scrub starts
Oct 08 09:47:34 compute-0 ceph-mon[73572]: 11.0 scrub ok
Oct 08 09:47:34 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:34 compute-0 ceph-mon[73572]: 10.b deep-scrub starts
Oct 08 09:47:34 compute-0 ceph-mon[73572]: 10.b deep-scrub ok
Oct 08 09:47:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 08 09:47:34 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 08 09:47:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 58'1020 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 58'1020 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:34 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:34.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 4 unknown, 8 peering, 341 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 298 B/s, 8 objects/s recovering
Oct 08 09:47:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:35 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 08 09:47:35 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 08 09:47:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 08 09:47:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 08 09:47:35 compute-0 ceph-mon[73572]: 11.8 deep-scrub starts
Oct 08 09:47:35 compute-0 ceph-mon[73572]: 11.8 deep-scrub ok
Oct 08 09:47:35 compute-0 ceph-mon[73572]: 11.c deep-scrub starts
Oct 08 09:47:35 compute-0 ceph-mon[73572]: 11.c deep-scrub ok
Oct 08 09:47:35 compute-0 ceph-mon[73572]: osdmap e66: 3 total, 3 up, 3 in
Oct 08 09:47:35 compute-0 ceph-mon[73572]: 12.5 scrub starts
Oct 08 09:47:35 compute-0 ceph-mon[73572]: 12.5 scrub ok
Oct 08 09:47:35 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 08 09:47:35 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 67 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=66/67 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=58'1021 lcod 58'1020 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:35 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 67 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:35 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 67 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:35 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 67 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:36 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 08 09:47:36 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 08 09:47:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:36 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:36.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 08 09:47:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:37.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 4 unknown, 8 peering, 341 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 242 B/s, 7 objects/s recovering
Oct 08 09:47:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:37 compute-0 ceph-mon[73572]: 8.b scrub starts
Oct 08 09:47:37 compute-0 ceph-mon[73572]: 8.b scrub ok
Oct 08 09:47:37 compute-0 ceph-mon[73572]: pgmap v63: 353 pgs: 4 unknown, 8 peering, 341 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 298 B/s, 8 objects/s recovering
Oct 08 09:47:37 compute-0 ceph-mon[73572]: 11.9 scrub starts
Oct 08 09:47:37 compute-0 ceph-mon[73572]: 11.9 scrub ok
Oct 08 09:47:37 compute-0 ceph-mon[73572]: 12.d scrub starts
Oct 08 09:47:37 compute-0 ceph-mon[73572]: 12.d scrub ok
Oct 08 09:47:37 compute-0 ceph-mon[73572]: osdmap e67: 3 total, 3 up, 3 in
Oct 08 09:47:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618000fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:37 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 08 09:47:37 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 08 09:47:37 compute-0 podman[98928]: 2025-10-08 09:47:37.941674318 +0000 UTC m=+4.769362345 volume create d9c3f3155264a057bad85aacca6dba5a24da2b46751f524b7f0f66122813512f
Oct 08 09:47:37 compute-0 podman[98928]: 2025-10-08 09:47:37.958631223 +0000 UTC m=+4.786319250 container create 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 podman[98928]: 2025-10-08 09:47:37.902708019 +0000 UTC m=+4.730396076 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct 08 09:47:38 compute-0 systemd[1]: Started libpod-conmon-9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9.scope.
Oct 08 09:47:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483be5c6a46c1953c2021770e099d5d487a3cc7c6eaed1a3a6e18f212d999fc8/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:38 compute-0 podman[98928]: 2025-10-08 09:47:38.075945883 +0000 UTC m=+4.903634000 container init 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 podman[98928]: 2025-10-08 09:47:38.085229016 +0000 UTC m=+4.912917063 container start 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 musing_leakey[99194]: 65534 65534
Oct 08 09:47:38 compute-0 systemd[1]: libpod-9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9.scope: Deactivated successfully.
Oct 08 09:47:38 compute-0 podman[98928]: 2025-10-08 09:47:38.091728711 +0000 UTC m=+4.919416758 container attach 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 podman[98928]: 2025-10-08 09:47:38.092844606 +0000 UTC m=+4.920532663 container died 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-483be5c6a46c1953c2021770e099d5d487a3cc7c6eaed1a3a6e18f212d999fc8-merged.mount: Deactivated successfully.
Oct 08 09:47:38 compute-0 podman[98928]: 2025-10-08 09:47:38.227659919 +0000 UTC m=+5.055347976 container remove 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 podman[98928]: 2025-10-08 09:47:38.238640535 +0000 UTC m=+5.066328602 volume remove d9c3f3155264a057bad85aacca6dba5a24da2b46751f524b7f0f66122813512f
Oct 08 09:47:38 compute-0 systemd[1]: libpod-conmon-9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9.scope: Deactivated successfully.
Oct 08 09:47:38 compute-0 podman[99212]: 2025-10-08 09:47:38.317938896 +0000 UTC m=+0.048079147 volume create 5e2e5af56f30451cf6c0d29d44cedcf6e4f8d101525902a32a9eae828ffd2aaa
Oct 08 09:47:38 compute-0 podman[99212]: 2025-10-08 09:47:38.334920392 +0000 UTC m=+0.065060683 container create 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 podman[99212]: 2025-10-08 09:47:38.292919307 +0000 UTC m=+0.023059578 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct 08 09:47:38 compute-0 systemd[1]: Started libpod-conmon-7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412.scope.
Oct 08 09:47:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06565b100d91f421383b86c306b258d55e5cc76b05a1cec2b0421cb1f9a2601/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:38 compute-0 podman[99212]: 2025-10-08 09:47:38.44548873 +0000 UTC m=+0.175629081 container init 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 podman[99212]: 2025-10-08 09:47:38.450787337 +0000 UTC m=+0.180927628 container start 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 goofy_jepsen[99228]: 65534 65534
Oct 08 09:47:38 compute-0 systemd[1]: libpod-7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412.scope: Deactivated successfully.
Oct 08 09:47:38 compute-0 podman[99212]: 2025-10-08 09:47:38.459356177 +0000 UTC m=+0.189496458 container attach 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 podman[99212]: 2025-10-08 09:47:38.459856903 +0000 UTC m=+0.189997204 container died 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Oct 08 09:47:38 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Oct 08 09:47:38 compute-0 ceph-mgr[73869]: [progress WARNING root] Starting Global Recovery Event,12 pgs not in active + clean state
Oct 08 09:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d06565b100d91f421383b86c306b258d55e5cc76b05a1cec2b0421cb1f9a2601-merged.mount: Deactivated successfully.
Oct 08 09:47:38 compute-0 podman[99212]: 2025-10-08 09:47:38.612692434 +0000 UTC m=+0.342832685 container remove 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:38 compute-0 podman[99212]: 2025-10-08 09:47:38.647273725 +0000 UTC m=+0.377413976 volume remove 5e2e5af56f30451cf6c0d29d44cedcf6e4f8d101525902a32a9eae828ffd2aaa
Oct 08 09:47:38 compute-0 systemd[1]: libpod-conmon-7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412.scope: Deactivated successfully.
Oct 08 09:47:38 compute-0 systemd[1]: Reloading.
Oct 08 09:47:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:38 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:38 compute-0 systemd-sysv-generator[99278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:38 compute-0 systemd-rc-local-generator[99275]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:47:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:47:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:39.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:39 compute-0 systemd[1]: Reloading.
Oct 08 09:47:39 compute-0 systemd-rc-local-generator[99312]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:47:39 compute-0 systemd-sysv-generator[99318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:47:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 4 unknown, 8 peering, 341 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 198 B/s, 5 objects/s recovering
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:39 compute-0 PackageKit[31040]: daemon quit
Oct 08 09:47:39 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 08 09:47:39 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:39 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 08 09:47:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 08 09:47:39 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 08 09:47:39 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 08 09:47:39 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=4 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.058046341s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 199.701385498s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:39 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=5 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057833672s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 199.701171875s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:39 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=6 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057991982s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 199.701385498s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:39 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=4 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057976723s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.701385498s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:39 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=5 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057407379s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.701171875s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:39 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.5( v 67'1027 (0'0,67'1027] local-lis/les=66/67 n=6 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.026637077s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=67'1024 lcod 67'1026 mlcod 67'1026 active pruub 199.670516968s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:39 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.5( v 67'1027 (0'0,67'1027] local-lis/les=66/67 n=6 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.026576042s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=67'1024 lcod 67'1026 mlcod 0'0 unknown NOTIFY pruub 199.670516968s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:39 compute-0 ceph-mon[73572]: 11.d scrub starts
Oct 08 09:47:39 compute-0 ceph-mon[73572]: 11.d scrub ok
Oct 08 09:47:39 compute-0 ceph-mon[73572]: 12.0 scrub starts
Oct 08 09:47:39 compute-0 ceph-mon[73572]: 12.0 scrub ok
Oct 08 09:47:39 compute-0 ceph-mon[73572]: pgmap v65: 353 pgs: 4 unknown, 8 peering, 341 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 242 B/s, 7 objects/s recovering
Oct 08 09:47:39 compute-0 ceph-mon[73572]: 8.e scrub starts
Oct 08 09:47:39 compute-0 ceph-mon[73572]: 8.e scrub ok
Oct 08 09:47:39 compute-0 ceph-mon[73572]: 10.6 scrub starts
Oct 08 09:47:39 compute-0 ceph-mon[73572]: 10.6 scrub ok
Oct 08 09:47:39 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=6 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057183266s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.701385498s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:39 compute-0 podman[99374]: 2025-10-08 09:47:39.655594301 +0000 UTC m=+0.067294874 container create 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd1fadf28913cbc0057245ad8febd4d04a90075db3637e26764bf6babfd02e1/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd1fadf28913cbc0057245ad8febd4d04a90075db3637e26764bf6babfd02e1/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:39 compute-0 podman[99374]: 2025-10-08 09:47:39.609312551 +0000 UTC m=+0.021013104 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct 08 09:47:39 compute-0 podman[99374]: 2025-10-08 09:47:39.745126035 +0000 UTC m=+0.156826588 container init 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:39 compute-0 podman[99374]: 2025-10-08 09:47:39.754337755 +0000 UTC m=+0.166038288 container start 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:39 compute-0 bash[99374]: 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c
Oct 08 09:47:39 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:623 level=info host_details="(Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 x86_64 compute-0 (none))"
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.788Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.789Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.793Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.793Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.795Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.795Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.18µs
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.795Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.797Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.797Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=265.717µs wal_replay_duration=1.500438ms wbl_replay_duration=280ns total_replay_duration=1.796176ms
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.801Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.802Z caller=main.go:1153 level=info msg="TSDB started"
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.802Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Oct 08 09:47:39 compute-0 sudo[98859]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.841Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=39.00772ms db_storage=1.88µs remote_storage=2.58µs web_handler=1.02µs query_engine=1.75µs scrape=5.201734ms scrape_sd=235.288µs notify=17.8µs notify_sd=17.011µs rules=31.974138ms tracing=10.92µs
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.841Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Oct 08 09:47:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.841Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Oct 08 09:47:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct 08 09:47:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:39 compute-0 ceph-mgr[73869]: [progress INFO root] complete: finished ev 0bf79cd8-eb11-4f4f-80b2-14468a3c828d (Updating prometheus deployment (+1 -> 1))
Oct 08 09:47:39 compute-0 ceph-mgr[73869]: [progress INFO root] Completed event 0bf79cd8-eb11-4f4f-80b2-14468a3c828d (Updating prometheus deployment (+1 -> 1)) in 8 seconds
Oct 08 09:47:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Oct 08 09:47:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 08 09:47:40 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 08 09:47:40 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 08 09:47:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 08 09:47:40 compute-0 ceph-mon[73572]: 11.b deep-scrub starts
Oct 08 09:47:40 compute-0 ceph-mon[73572]: 11.b deep-scrub ok
Oct 08 09:47:40 compute-0 ceph-mon[73572]: 12.1f scrub starts
Oct 08 09:47:40 compute-0 ceph-mon[73572]: 12.1f scrub ok
Oct 08 09:47:40 compute-0 ceph-mon[73572]: pgmap v66: 353 pgs: 4 unknown, 8 peering, 341 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 198 B/s, 5 objects/s recovering
Oct 08 09:47:40 compute-0 ceph-mon[73572]: 11.2 scrub starts
Oct 08 09:47:40 compute-0 ceph-mon[73572]: 11.2 scrub ok
Oct 08 09:47:40 compute-0 ceph-mon[73572]: osdmap e68: 3 total, 3 up, 3 in
Oct 08 09:47:40 compute-0 ceph-mon[73572]: 10.1a scrub starts
Oct 08 09:47:40 compute-0 ceph-mon[73572]: 10.1a scrub ok
Oct 08 09:47:40 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:40 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:40 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:40 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 08 09:47:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 08 09:47:40 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 08 09:47:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:40 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:40.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  1: '-n'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  2: 'mgr.compute-0.ixicfj'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  3: '-f'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  4: '--setuser'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  5: 'ceph'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  6: '--setgroup'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  7: 'ceph'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  8: '--default-log-to-file=false'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  9: '--default-log-to-journald=true'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 08 09:47:40 compute-0 ceph-mgr[73869]: mgr respawn  exe_path /proc/self/exe
Oct 08 09:47:40 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.ixicfj(active, since 87s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:41.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:41 compute-0 sshd-session[92048]: Connection closed by 192.168.122.100 port 43490
Oct 08 09:47:41 compute-0 sshd-session[92017]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 08 09:47:41 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Oct 08 09:47:41 compute-0 systemd[1]: session-36.scope: Consumed 45.137s CPU time.
Oct 08 09:47:41 compute-0 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Oct 08 09:47:41 compute-0 systemd-logind[798]: Removed session 36.
Oct 08 09:47:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct 08 09:47:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct 08 09:47:41 compute-0 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 08 09:47:41 compute-0 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct 08 09:47:41 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct 08 09:47:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:41.213+0000 7fa16c208140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:47:41 compute-0 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 08 09:47:41 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct 08 09:47:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:41.288+0000 7fa16c208140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:47:41 compute-0 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 08 09:47:41 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct 08 09:47:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:41 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Oct 08 09:47:41 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Oct 08 09:47:41 compute-0 ceph-mon[73572]: 8.1 scrub starts
Oct 08 09:47:41 compute-0 ceph-mon[73572]: 8.1 scrub ok
Oct 08 09:47:41 compute-0 ceph-mon[73572]: 10.1c scrub starts
Oct 08 09:47:41 compute-0 ceph-mon[73572]: 10.1c scrub ok
Oct 08 09:47:41 compute-0 ceph-mon[73572]: osdmap e69: 3 total, 3 up, 3 in
Oct 08 09:47:41 compute-0 ceph-mon[73572]: 8.9 scrub starts
Oct 08 09:47:41 compute-0 ceph-mon[73572]: 8.9 scrub ok
Oct 08 09:47:41 compute-0 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 08 09:47:41 compute-0 ceph-mon[73572]: mgrmap e27: compute-0.ixicfj(active, since 87s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:42 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct 08 09:47:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:42.117+0000 7fa16c208140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:47:42 compute-0 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 08 09:47:42 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct 08 09:47:42 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 08 09:47:42 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 08 09:47:42 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct 08 09:47:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:42.791+0000 7fa16c208140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:47:42 compute-0 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 08 09:47:42 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct 08 09:47:42 compute-0 ceph-mon[73572]: 10.1d scrub starts
Oct 08 09:47:42 compute-0 ceph-mon[73572]: 10.1d scrub ok
Oct 08 09:47:42 compute-0 ceph-mon[73572]: 8.0 scrub starts
Oct 08 09:47:42 compute-0 ceph-mon[73572]: 8.0 scrub ok
Oct 08 09:47:42 compute-0 ceph-mon[73572]: 11.e scrub starts
Oct 08 09:47:42 compute-0 ceph-mon[73572]: 11.e scrub ok
Oct 08 09:47:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:42 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:42.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 08 09:47:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 08 09:47:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:   from numpy import show_config as show_numpy_config
Oct 08 09:47:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:42.957+0000 7fa16c208140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:47:42 compute-0 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 08 09:47:42 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct 08 09:47:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:43.033+0000 7fa16c208140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:47:43 compute-0 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 08 09:47:43 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct 08 09:47:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:43.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:43 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct 08 09:47:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:43.183+0000 7fa16c208140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:47:43 compute-0 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 08 09:47:43 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct 08 09:47:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:43 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 08 09:47:43 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 08 09:47:43 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct 08 09:47:43 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct 08 09:47:43 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct 08 09:47:43 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct 08 09:47:44 compute-0 ceph-mon[73572]: 12.1b deep-scrub starts
Oct 08 09:47:44 compute-0 ceph-mon[73572]: 12.1b deep-scrub ok
Oct 08 09:47:44 compute-0 ceph-mon[73572]: 8.7 scrub starts
Oct 08 09:47:44 compute-0 ceph-mon[73572]: 8.7 scrub ok
Oct 08 09:47:44 compute-0 ceph-mon[73572]: 8.3 scrub starts
Oct 08 09:47:44 compute-0 ceph-mon[73572]: 8.3 scrub ok
Oct 08 09:47:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.212+0000 7fa16c208140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct 08 09:47:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.439+0000 7fa16c208140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct 08 09:47:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.522+0000 7fa16c208140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct 08 09:47:44 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 08 09:47:44 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 08 09:47:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.602+0000 7fa16c208140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct 08 09:47:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.692+0000 7fa16c208140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct 08 09:47:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.764+0000 7fa16c208140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 08 09:47:44 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct 08 09:47:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:44.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:45.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:45.099+0000 7fa16c208140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:47:45 compute-0 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 08 09:47:45 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct 08 09:47:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:45.205+0000 7fa16c208140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:47:45 compute-0 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 08 09:47:45 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct 08 09:47:45 compute-0 ceph-mon[73572]: 10.1f scrub starts
Oct 08 09:47:45 compute-0 ceph-mon[73572]: 10.1f scrub ok
Oct 08 09:47:45 compute-0 ceph-mon[73572]: 11.6 scrub starts
Oct 08 09:47:45 compute-0 ceph-mon[73572]: 11.6 scrub ok
Oct 08 09:47:45 compute-0 ceph-mon[73572]: 11.a scrub starts
Oct 08 09:47:45 compute-0 ceph-mon[73572]: 11.a scrub ok
Oct 08 09:47:45 compute-0 ceph-mon[73572]: 11.18 scrub starts
Oct 08 09:47:45 compute-0 ceph-mon[73572]: 11.18 scrub ok
Oct 08 09:47:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:45 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct 08 09:47:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:45 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 08 09:47:45 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 08 09:47:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:45.640+0000 7fa16c208140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:47:45 compute-0 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 08 09:47:45 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct 08 09:47:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.199+0000 7fa16c208140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct 08 09:47:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.267+0000 7fa16c208140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct 08 09:47:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.353+0000 7fa16c208140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct 08 09:47:46 compute-0 ceph-mon[73572]: 12.16 scrub starts
Oct 08 09:47:46 compute-0 ceph-mon[73572]: 12.16 scrub ok
Oct 08 09:47:46 compute-0 ceph-mon[73572]: 8.5 scrub starts
Oct 08 09:47:46 compute-0 ceph-mon[73572]: 8.5 scrub ok
Oct 08 09:47:46 compute-0 ceph-mon[73572]: 10.7 scrub starts
Oct 08 09:47:46 compute-0 ceph-mon[73572]: 10.7 scrub ok
Oct 08 09:47:46 compute-0 ceph-mon[73572]: 8.1a scrub starts
Oct 08 09:47:46 compute-0 ceph-mon[73572]: 8.1a scrub ok
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct 08 09:47:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.523+0000 7fa16c208140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct 08 09:47:46 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 08 09:47:46 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 08 09:47:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.598+0000 7fa16c208140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct 08 09:47:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.765+0000 7fa16c208140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct 08 09:47:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:46 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 08 09:47:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 08 09:47:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.998+0000 7fa16c208140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 08 09:47:46 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct 08 09:47:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:47.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:47 compute-0 sshd-session[99446]: Accepted publickey for zuul from 192.168.122.30 port 35646 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:47:47 compute-0 systemd-logind[798]: New session 38 of user zuul.
Oct 08 09:47:47 compute-0 systemd[1]: Started Session 38 of User zuul.
Oct 08 09:47:47 compute-0 sshd-session[99446]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov restarted
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:47 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.271+0000 7fa16c208140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.345+0000 7fa16c208140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x55d6aa6db860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:47 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.ixicfj(active, starting, since 0.161204s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct 08 09:47:47 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 all = 0
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 all = 0
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 all = 0
Oct 08 09:47:47 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 all = 1
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Starting
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:47:47
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: cephadm
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: dashboard
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO access_control] Loading user roles DB version=2
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO sso] Loading SSO DB version=1
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [progress INFO root] Loading...
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fa0ebd25460>, <progress.module.GhostEvent object at 0x7fa0ebd25490>, <progress.module.GhostEvent object at 0x7fa0ebd25b80>, <progress.module.GhostEvent object at 0x7fa0ebd25be0>, <progress.module.GhostEvent object at 0x7fa0ebd25c70>, <progress.module.GhostEvent object at 0x7fa0ebd25ca0>, <progress.module.GhostEvent object at 0x7fa0ebd25c40>, <progress.module.GhostEvent object at 0x7fa0ebd25cd0>, <progress.module.GhostEvent object at 0x7fa0ebd25d30>, <progress.module.GhostEvent object at 0x7fa0ebd25d90>, <progress.module.GhostEvent object at 0x7fa0ebd25dc0>, <progress.module.GhostEvent object at 0x7fa0ebd25df0>, <progress.module.GhostEvent object at 0x7fa0ebd25e20>, <progress.module.GhostEvent object at 0x7fa0ebd25ee0>, <progress.module.GhostEvent object at 0x7fa0ebd25e50>, <progress.module.GhostEvent object at 0x7fa0ebd25e80>, <progress.module.GhostEvent object at 0x7fa0ebd25d60>, <progress.module.GhostEvent object at 0x7fa0ebd25d00>, <progress.module.GhostEvent object at 0x7fa0ebd25eb0>, <progress.module.GhostEvent object at 0x7fa0ebd25f10>, <progress.module.GhostEvent object at 0x7fa0ebd25f40>, <progress.module.GhostEvent object at 0x7fa0ebd25f70>, <progress.module.GhostEvent object at 0x7fa0ebd25fa0>, <progress.module.GhostEvent object at 0x7fa0ebd25fd0>, <progress.module.GhostEvent object at 0x7fa0ebd32040>] historic events
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct 08 09:47:47 compute-0 ceph-mon[73572]: 11.19 deep-scrub starts
Oct 08 09:47:47 compute-0 ceph-mon[73572]: 11.19 deep-scrub ok
Oct 08 09:47:47 compute-0 ceph-mon[73572]: 12.14 deep-scrub starts
Oct 08 09:47:47 compute-0 ceph-mon[73572]: 12.14 deep-scrub ok
Oct 08 09:47:47 compute-0 ceph-mon[73572]: 8.1e scrub starts
Oct 08 09:47:47 compute-0 ceph-mon[73572]: 8.1e scrub ok
Oct 08 09:47:47 compute-0 ceph-mon[73572]: Standby manager daemon compute-1.swlvov restarted
Oct 08 09:47:47 compute-0 ceph-mon[73572]: Standby manager daemon compute-1.swlvov started
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx restarted
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.789231) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867789333, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 524, "num_deletes": 251, "total_data_size": 944358, "memory_usage": 955552, "flush_reason": "Manual Compaction"}
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: prometheus
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [prometheus INFO root] Cache enabled
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [prometheus INFO root] starting metric collection thread
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [prometheus INFO root] Starting engine...
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:47:47] ENGINE Bus STARTING
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:47:47] ENGINE Bus STARTING
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: CherryPy Checker:
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: The Application mounted at '' has an empty config.
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867816265, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 939999, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7494, "largest_seqno": 8017, "table_properties": {"data_size": 936979, "index_size": 864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8456, "raw_average_key_size": 20, "raw_value_size": 930271, "raw_average_value_size": 2220, "num_data_blocks": 37, "num_entries": 419, "num_filter_entries": 419, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916853, "oldest_key_time": 1759916853, "file_creation_time": 1759916867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 27093 microseconds, and 4668 cpu microseconds.
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.816336) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 939999 bytes OK
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.816372) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820257) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820275) EVENT_LOG_v1 {"time_micros": 1759916867820269, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820303) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 941087, prev total WAL file size 941087, number of live WAL files 2.
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820915) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(917KB)], [20(11MB)]
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867821147, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13217889, "oldest_snapshot_seqno": -1}
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0ced58640 -1 client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3148 keys, 12002416 bytes, temperature: kUnknown
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867946561, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12002416, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11977584, "index_size": 15891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 81561, "raw_average_key_size": 25, "raw_value_size": 11915700, "raw_average_value_size": 3785, "num_data_blocks": 691, "num_entries": 3148, "num_filter_entries": 3148, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759916867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.946819) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12002416 bytes
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.954943) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.4 rd, 95.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.7 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(26.8) write-amplify(12.8) OK, records in: 3672, records dropped: 524 output_compression: NoCompression
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.955027) EVENT_LOG_v1 {"time_micros": 1759916867954988, "job": 6, "event": "compaction_finished", "compaction_time_micros": 125452, "compaction_time_cpu_micros": 28598, "output_level": 6, "num_output_files": 1, "total_output_size": 12002416, "num_input_records": 3672, "num_output_records": 3148, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867955531, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867958163, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:47:47 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct 08 09:47:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct 08 09:47:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct 08 09:47:47 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct 08 09:47:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:47:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:47:48] ENGINE Serving on http://:::9283
Oct 08 09:47:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:47:48 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:47:48] ENGINE Serving on http://:::9283
Oct 08 09:47:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:47:48] ENGINE Bus STARTED
Oct 08 09:47:48 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:47:48] ENGINE Bus STARTED
Oct 08 09:47:48 compute-0 ceph-mgr[73869]: [prometheus INFO root] Engine started.
Oct 08 09:47:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 08 09:47:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] setup complete
Oct 08 09:47:48 compute-0 sshd-session[99662]: Accepted publickey for ceph-admin from 192.168.122.100 port 57904 ssh2: RSA SHA256:oltPosKfvcqSfDqAHq+rz23Sj7/sQ0zn4f0i/r7NEZA
Oct 08 09:47:48 compute-0 systemd-logind[798]: New session 39 of user ceph-admin.
Oct 08 09:47:48 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Oct 08 09:47:48 compute-0 sshd-session[99662]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 08 09:47:48 compute-0 ceph-mgr[73869]: [dashboard INFO dashboard.module] Engine started.
Oct 08 09:47:48 compute-0 python3.9[99710]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:47:48 compute-0 sudo[99774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:48 compute-0 sudo[99774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:48 compute-0 sudo[99774]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:48 compute-0 sudo[99802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 09:47:48 compute-0 sudo[99802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:48 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 08 09:47:48 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 08 09:47:48 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.ixicfj(active, since 1.22103s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:48 compute-0 ceph-mon[73572]: 8.a scrub starts
Oct 08 09:47:48 compute-0 ceph-mon[73572]: 8.a scrub ok
Oct 08 09:47:48 compute-0 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct 08 09:47:48 compute-0 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct 08 09:47:48 compute-0 ceph-mon[73572]: 12.1 scrub starts
Oct 08 09:47:48 compute-0 ceph-mon[73572]: 12.1 scrub ok
Oct 08 09:47:48 compute-0 ceph-mon[73572]: osdmap e70: 3 total, 3 up, 3 in
Oct 08 09:47:48 compute-0 ceph-mon[73572]: mgrmap e28: compute-0.ixicfj(active, starting, since 0.161204s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:48 compute-0 ceph-mon[73572]: 8.1d scrub starts
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: 8.1d scrub ok
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct 08 09:47:48 compute-0 ceph-mon[73572]: 11.13 deep-scrub starts
Oct 08 09:47:48 compute-0 ceph-mon[73572]: 11.13 deep-scrub ok
Oct 08 09:47:48 compute-0 ceph-mon[73572]: Standby manager daemon compute-2.mtagwx restarted
Oct 08 09:47:48 compute-0 ceph-mon[73572]: Standby manager daemon compute-2.mtagwx started
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct 08 09:47:48 compute-0 ceph-mon[73572]: mgrmap e29: compute-0.ixicfj(active, since 1.22103s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:48 compute-0 podman[99954]: 2025-10-08 09:47:48.896263915 +0000 UTC m=+0.112956965 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:47:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:48 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:48.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:49 compute-0 podman[99954]: 2025-10-08 09:47:49.003328302 +0000 UTC m=+0.220021352 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:47:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:49.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Bus STARTING
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Bus STARTING
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:47:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Bus STARTED
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Bus STARTED
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Client ('192.168.122.100', 43154) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Client ('192.168.122.100', 43154) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:47:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:49 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 08 09:47:49 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 08 09:47:49 compute-0 podman[100141]: 2025-10-08 09:47:49.551521333 +0000 UTC m=+0.067864491 container exec 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Oct 08 09:47:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 08 09:47:49 compute-0 podman[100166]: 2025-10-08 09:47:49.622223014 +0000 UTC m=+0.054991676 container exec_died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:49 compute-0 podman[100141]: 2025-10-08 09:47:49.633420117 +0000 UTC m=+0.149763285 container exec_died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 08 09:47:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 08 09:47:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 08 09:47:49 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 08 09:47:49 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.475421906s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 206.314956665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:49 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.475279808s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.314956665s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:49 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.475381851s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 206.315155029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:49 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.475350380s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.315155029s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:49 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.477218628s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 206.317718506s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:49 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.477195740s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.317718506s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:49 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.477206230s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 206.318115234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:49 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.477184296s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.318115234s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:49 compute-0 ceph-mon[73572]: 11.1f scrub starts
Oct 08 09:47:49 compute-0 ceph-mon[73572]: 11.1f scrub ok
Oct 08 09:47:49 compute-0 ceph-mon[73572]: 11.1c deep-scrub starts
Oct 08 09:47:49 compute-0 ceph-mon[73572]: 11.1c deep-scrub ok
Oct 08 09:47:49 compute-0 ceph-mon[73572]: pgmap v3: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:49 compute-0 ceph-mon[73572]: 11.16 scrub starts
Oct 08 09:47:49 compute-0 ceph-mon[73572]: 11.16 scrub ok
Oct 08 09:47:49 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 08 09:47:49 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct 08 09:47:49 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.ixicfj(active, since 2s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:49 compute-0 podman[100227]: 2025-10-08 09:47:49.921737362 +0000 UTC m=+0.073714547 container exec c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:47:49 compute-0 podman[100227]: 2025-10-08 09:47:49.942344411 +0000 UTC m=+0.094321596 container exec_died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:47:50 compute-0 podman[100294]: 2025-10-08 09:47:50.184937604 +0000 UTC m=+0.071866099 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:47:50 compute-0 podman[100294]: 2025-10-08 09:47:50.194549517 +0000 UTC m=+0.081478032 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:47:50 compute-0 podman[100358]: 2025-10-08 09:47:50.442824208 +0000 UTC m=+0.061049417 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, architecture=x86_64, name=keepalived, description=keepalived for Ceph, io.openshift.expose-services=, release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Oct 08 09:47:50 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 08 09:47:50 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 08 09:47:50 compute-0 podman[100358]: 2025-10-08 09:47:50.482302883 +0000 UTC m=+0.100527992 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.openshift.expose-services=, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, architecture=x86_64, build-date=2023-02-22T09:23:20)
Oct 08 09:47:50 compute-0 podman[100477]: 2025-10-08 09:47:50.723908784 +0000 UTC m=+0.064163814 container exec 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 08 09:47:50 compute-0 podman[100477]: 2025-10-08 09:47:50.768499491 +0000 UTC m=+0.108754531 container exec_died 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 08 09:47:50 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 08 09:47:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:47:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:50 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:50 compute-0 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Bus STARTING
Oct 08 09:47:50 compute-0 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Serving on http://192.168.122.100:8765
Oct 08 09:47:50 compute-0 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Serving on https://192.168.122.100:7150
Oct 08 09:47:50 compute-0 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Bus STARTED
Oct 08 09:47:50 compute-0 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Client ('192.168.122.100', 43154) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 08 09:47:50 compute-0 ceph-mon[73572]: 11.10 scrub starts
Oct 08 09:47:50 compute-0 ceph-mon[73572]: 11.10 scrub ok
Oct 08 09:47:50 compute-0 ceph-mon[73572]: 8.18 scrub starts
Oct 08 09:47:50 compute-0 ceph-mon[73572]: 8.18 scrub ok
Oct 08 09:47:50 compute-0 ceph-mon[73572]: pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:50 compute-0 ceph-mon[73572]: 8.2 scrub starts
Oct 08 09:47:50 compute-0 ceph-mon[73572]: 8.2 scrub ok
Oct 08 09:47:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 08 09:47:50 compute-0 ceph-mon[73572]: osdmap e71: 3 total, 3 up, 3 in
Oct 08 09:47:50 compute-0 ceph-mon[73572]: mgrmap e30: compute-0.ixicfj(active, since 2s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:50 compute-0 ceph-mon[73572]: osdmap e72: 3 total, 3 up, 3 in
Oct 08 09:47:50 compute-0 sudo[100625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuamnutbiwwjmaojkouavzuvdgrgjgnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916870.5064652-56-130302500601178/AnsiballZ_command.py'
Oct 08 09:47:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:50.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:50 compute-0 sudo[100625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:47:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:47:51 compute-0 podman[100616]: 2025-10-08 09:47:51.005358282 +0000 UTC m=+0.066130087 container exec 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:47:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:51.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:51 compute-0 python3.9[100636]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:47:51 compute-0 podman[100616]: 2025-10-08 09:47:51.186738444 +0000 UTC m=+0.247510219 container exec_died 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:47:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:47:51 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 08 09:47:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:47:51 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 08 09:47:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v7: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Oct 08 09:47:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 08 09:47:51 compute-0 podman[100739]: 2025-10-08 09:47:51.609477269 +0000 UTC m=+0.076108552 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:51 compute-0 podman[100739]: 2025-10-08 09:47:51.651468883 +0000 UTC m=+0.118100146 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:47:51 compute-0 sudo[99802]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 08 09:47:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 08 09:47:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 08 09:47:51 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 08 09:47:51 compute-0 sudo[100783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:51 compute-0 sudo[100783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:51 compute-0 sudo[100783]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:51 compute-0 sudo[100809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:47:51 compute-0 sudo[100809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:51 compute-0 ceph-mon[73572]: 8.13 scrub starts
Oct 08 09:47:51 compute-0 ceph-mon[73572]: 8.13 scrub ok
Oct 08 09:47:51 compute-0 ceph-mon[73572]: 11.1a scrub starts
Oct 08 09:47:51 compute-0 ceph-mon[73572]: 11.1a scrub ok
Oct 08 09:47:51 compute-0 ceph-mon[73572]: 8.11 scrub starts
Oct 08 09:47:51 compute-0 ceph-mon[73572]: 8.11 scrub ok
Oct 08 09:47:51 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 08 09:47:51 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:51 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 08 09:47:51 compute-0 ceph-mon[73572]: osdmap e73: 3 total, 3 up, 3 in
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.ixicfj(active, since 4s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:47:52 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 73 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:52 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 73 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:52 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 73 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:52 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 73 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:47:52 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Oct 08 09:47:52 compute-0 sudo[100809]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:52 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Oct 08 09:47:52 compute-0 sudo[100864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:52 compute-0 sudo[100864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:52 compute-0 sudo[100864]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:47:52 compute-0 sudo[100889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 sudo[100889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:47:52 compute-0 sudo[100889]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:47:52 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:47:52 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:47:52 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:47:52 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:47:52 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:47:52 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:47:52 compute-0 sudo[100935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 08 09:47:52 compute-0 sudo[100935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:52 compute-0 sudo[100935]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:52 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:52 compute-0 ceph-mon[73572]: 11.11 scrub starts
Oct 08 09:47:52 compute-0 ceph-mon[73572]: 11.11 scrub ok
Oct 08 09:47:52 compute-0 ceph-mon[73572]: 8.1b scrub starts
Oct 08 09:47:52 compute-0 ceph-mon[73572]: 8.1b scrub ok
Oct 08 09:47:52 compute-0 ceph-mon[73572]: pgmap v7: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:47:52 compute-0 ceph-mon[73572]: 8.1c scrub starts
Oct 08 09:47:52 compute-0 ceph-mon[73572]: 8.1c scrub ok
Oct 08 09:47:52 compute-0 ceph-mon[73572]: mgrmap e31: compute-0.ixicfj(active, since 4s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:47:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:47:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:52 compute-0 sudo[100960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph
Oct 08 09:47:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:52.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:52 compute-0 sudo[100960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:52 compute-0 sudo[100960]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 sudo[100986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:47:53 compute-0 sudo[100986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[100986]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:53 compute-0 sudo[101012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:47:53 compute-0 sudo[101012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101012]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 sudo[101037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:47:53 compute-0 sudo[101037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101037]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 08 09:47:53 compute-0 sudo[101085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:47:53 compute-0 sudo[101085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101085]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:53 compute-0 sudo[101110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new
Oct 08 09:47:53 compute-0 sudo[101110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101110]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 08 09:47:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=4 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.815571785s) [0] async=[0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 216.221969604s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=4 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.815517426s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.221969604s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.815158844s) [0] async=[0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 216.221939087s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.815108299s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.221939087s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=5 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.810555458s) [0] async=[0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 216.217407227s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=5 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.810498238s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.217407227s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.814806938s) [0] async=[0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 216.221908569s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:47:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.814755440s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.221908569s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:47:53 compute-0 sudo[101135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 08 09:47:53 compute-0 sudo[101135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101135]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:53 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.b scrub starts
Oct 08 09:47:53 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.b scrub ok
Oct 08 09:47:53 compute-0 sudo[101160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:47:53 compute-0 sudo[101160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101160]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:53 compute-0 sudo[101185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:47:53 compute-0 sudo[101185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101185]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 sudo[101210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:47:53 compute-0 sudo[101210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101210]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v10: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 13 op/s; 54 B/s, 4 objects/s recovering
Oct 08 09:47:53 compute-0 sudo[101235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:47:53 compute-0 sudo[101235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101235]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 sudo[101260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:47:53 compute-0 sudo[101260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101260]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 sudo[101308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:47:53 compute-0 sudo[101308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101308]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:53 compute-0 sudo[101333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new
Oct 08 09:47:53 compute-0 sudo[101333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101333]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 sudo[101359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:53 compute-0 sudo[101359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101359]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:53 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:53 compute-0 sudo[101384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 08 09:47:53 compute-0 sudo[101384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:53 compute-0 sudo[101384]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:53 compute-0 ceph-mon[73572]: 10.13 deep-scrub starts
Oct 08 09:47:53 compute-0 ceph-mon[73572]: 10.13 deep-scrub ok
Oct 08 09:47:53 compute-0 ceph-mon[73572]: 11.1d scrub starts
Oct 08 09:47:53 compute-0 ceph-mon[73572]: 11.1d scrub ok
Oct 08 09:47:53 compute-0 ceph-mon[73572]: 12.11 scrub starts
Oct 08 09:47:53 compute-0 ceph-mon[73572]: 12.11 scrub ok
Oct 08 09:47:53 compute-0 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct 08 09:47:53 compute-0 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct 08 09:47:53 compute-0 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct 08 09:47:53 compute-0 ceph-mon[73572]: osdmap e74: 3 total, 3 up, 3 in
Oct 08 09:47:53 compute-0 sudo[101409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph
Oct 08 09:47:54 compute-0 sudo[101409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101409]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 sudo[101434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:47:54 compute-0 sudo[101434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101434]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 sudo[101459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:47:54 compute-0 sudo[101459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101459]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 sudo[101484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:47:54 compute-0 sudo[101484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101484]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 sudo[101532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:47:54 compute-0 sudo[101532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101532]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 sudo[101558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new
Oct 08 09:47:54 compute-0 sudo[101558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101558]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 08 09:47:54 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 08 09:47:54 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 08 09:47:54 compute-0 sudo[101583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 sudo[101583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101583]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 sudo[101608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:47:54 compute-0 sudo[101608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101608]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 sudo[101633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config
Oct 08 09:47:54 compute-0 sudo[101633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101633]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Oct 08 09:47:54 compute-0 sudo[101658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:47:54 compute-0 sudo[101658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101658]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Oct 08 09:47:54 compute-0 sudo[101686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:47:54 compute-0 sudo[101686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101686]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 sudo[101711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:47:54 compute-0 sudo[101711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101711]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 sudo[101760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:47:54 compute-0 sudo[101760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101760]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 sudo[101785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new
Oct 08 09:47:54 compute-0 sudo[101785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101785]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 sudo[101810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-787292cc-8154-50c4-9e00-e9be3e817149/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring.new /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 sudo[101810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:54 compute-0 sudo[101810]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:47:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:47:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:54 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:54.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
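
The radosgw beast lines record anonymous "HEAD / HTTP/1.0" probes arriving roughly once a second from 192.168.122.100 and 192.168.122.102 and answered with 200, the usual shape of a load-balancer health check against the RGW frontend. A hedged Python sketch of such a probe follows; the port and timeout are assumptions, since the listening port does not appear in these lines:

    import http.client

    def rgw_alive(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
        """Send an anonymous HEAD / to radosgw and treat HTTP 200 as healthy."""
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()

    # e.g. rgw_alive("192.168.122.100")
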
Oct 08 09:47:54 compute-0 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:54 compute-0 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:54 compute-0 ceph-mon[73572]: 12.b scrub starts
Oct 08 09:47:54 compute-0 ceph-mon[73572]: 12.b scrub ok
Oct 08 09:47:54 compute-0 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct 08 09:47:54 compute-0 ceph-mon[73572]: 11.7 scrub starts
Oct 08 09:47:54 compute-0 ceph-mon[73572]: 11.7 scrub ok
Oct 08 09:47:54 compute-0 ceph-mon[73572]: pgmap v10: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 13 op/s; 54 B/s, 4 objects/s recovering
Oct 08 09:47:54 compute-0 ceph-mon[73572]: 8.c deep-scrub starts
Oct 08 09:47:54 compute-0 ceph-mon[73572]: 8.c deep-scrub ok
Oct 08 09:47:54 compute-0 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 08 09:47:54 compute-0 ceph-mon[73572]: osdmap e75: 3 total, 3 up, 3 in
Oct 08 09:47:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:55.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:47:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:47:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:47:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:47:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:47:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:47:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:47:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:47:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:47:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
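
The audit entries above show the cephadm mgr module persisting its per-host and per-service state with "config-key set" (host.compute-2, osd_remove_queue, spec.nfs.cephfs) and issuing read-only monitor commands such as "osd tree" filtered to destroyed OSDs and "config generate-minimal-conf". A minimal sketch of issuing the same kinds of commands from the ceph CLI, shelling out from Python; the key name is copied from the log, the value is a placeholder:

    import json
    import subprocess

    def config_key_set(key: str, value: str) -> None:
        """CLI equivalent of the audited 'config-key set' mon_command."""
        subprocess.run(["ceph", "config-key", "set", key, value], check=True)

    def osd_tree_destroyed() -> dict:
        """Mirror of the audited {"prefix": "osd tree", "states": ["destroyed"], "format": "json"} call."""
        out = subprocess.run(["ceph", "osd", "tree", "destroyed", "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # config_key_set("mgr/cephadm/host.compute-2", "<json blob>")   # value is elided in the audit log
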
Oct 08 09:47:55 compute-0 sudo[101842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:55 compute-0 sudo[101842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:55 compute-0 sudo[101842]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:55 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.c scrub starts
Oct 08 09:47:55 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.c scrub ok
Oct 08 09:47:55 compute-0 sudo[101867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:47:55 compute-0 sudo[101867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v12: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 11 op/s; 45 B/s, 4 objects/s recovering
Oct 08 09:47:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:47:55] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Oct 08 09:47:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:47:55] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Oct 08 09:47:55 compute-0 podman[101932]: 2025-10-08 09:47:55.826721856 +0000 UTC m=+0.045905658 container create f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 08 09:47:55 compute-0 systemd[1]: Started libpod-conmon-f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9.scope.
Oct 08 09:47:55 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:55 compute-0 podman[101932]: 2025-10-08 09:47:55.890076947 +0000 UTC m=+0.109260769 container init f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:47:55 compute-0 podman[101932]: 2025-10-08 09:47:55.800381177 +0000 UTC m=+0.019564989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:47:55 compute-0 podman[101932]: 2025-10-08 09:47:55.900688656 +0000 UTC m=+0.119872448 container start f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:47:55 compute-0 priceless_mayer[101949]: 167 167
Oct 08 09:47:55 compute-0 podman[101932]: 2025-10-08 09:47:55.905127179 +0000 UTC m=+0.124310981 container attach f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:47:55 compute-0 systemd[1]: libpod-f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9.scope: Deactivated successfully.
Oct 08 09:47:55 compute-0 podman[101932]: 2025-10-08 09:47:55.905690783 +0000 UTC m=+0.124874595 container died f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:47:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-79a4a504a07fa9c02ad08830b214ef315617fc4c2ee23688f067bad3aec25b07-merged.mount: Deactivated successfully.
Oct 08 09:47:55 compute-0 podman[101932]: 2025-10-08 09:47:55.964261032 +0000 UTC m=+0.183444834 container remove f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:47:55 compute-0 systemd[1]: libpod-conmon-f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9.scope: Deactivated successfully.
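
The podman lines above trace one complete throwaway container: image pull (from local storage), create, init, start, attach, the entrypoint printing "167 167" (the ceph uid/gid), then died, remove, and systemd deactivating the libpod and conmon scopes, all within a fraction of a second. This is how cephadm runs its ceph-volume helper commands. A hedged sketch of launching a one-shot containerised command the same way, with the image digest copied from the log; the bind mounts cephadm actually adds are not reproduced here:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def run_oneshot(args: list[str]) -> str:
        """Run a short-lived container matching the create/start/died/remove sequence above."""
        cmd = ["podman", "run", "--rm", "--net=host", IMAGE] + args
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # e.g. run_oneshot(["ceph-volume", "lvm", "list", "--format", "json"])
    # cephadm's real invocation also mounts /var/lib/ceph, /etc/ceph, /var/log/ceph, etc.
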
Oct 08 09:47:55 compute-0 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:55 compute-0 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:55 compute-0 ceph-mon[73572]: 12.12 scrub starts
Oct 08 09:47:55 compute-0 ceph-mon[73572]: 11.5 scrub starts
Oct 08 09:47:55 compute-0 ceph-mon[73572]: 11.5 scrub ok
Oct 08 09:47:55 compute-0 ceph-mon[73572]: 12.12 scrub ok
Oct 08 09:47:55 compute-0 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct 08 09:47:55 compute-0 ceph-mon[73572]: 12.4 scrub starts
Oct 08 09:47:55 compute-0 ceph-mon[73572]: 12.4 scrub ok
Oct 08 09:47:55 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:55 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:55 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:55 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:55 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:47:55 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:47:55 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:47:56 compute-0 podman[101973]: 2025-10-08 09:47:56.127556804 +0000 UTC m=+0.050840354 container create 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 09:47:56 compute-0 systemd[1]: Started libpod-conmon-467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11.scope.
Oct 08 09:47:56 compute-0 podman[101973]: 2025-10-08 09:47:56.098547516 +0000 UTC m=+0.021831046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:47:56 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:56 compute-0 podman[101973]: 2025-10-08 09:47:56.225416841 +0000 UTC m=+0.148700371 container init 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 08 09:47:56 compute-0 podman[101973]: 2025-10-08 09:47:56.233373413 +0000 UTC m=+0.156656933 container start 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:47:56 compute-0 podman[101973]: 2025-10-08 09:47:56.237196931 +0000 UTC m=+0.160480461 container attach 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:47:56 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 08 09:47:56 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 08 09:47:56 compute-0 objective_grothendieck[101989]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:47:56 compute-0 objective_grothendieck[101989]: --> All data devices are unavailable
Oct 08 09:47:56 compute-0 systemd[1]: libpod-467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11.scope: Deactivated successfully.
Oct 08 09:47:56 compute-0 podman[101973]: 2025-10-08 09:47:56.567941499 +0000 UTC m=+0.491225009 container died 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c-merged.mount: Deactivated successfully.
Oct 08 09:47:56 compute-0 podman[101973]: 2025-10-08 09:47:56.633923386 +0000 UTC m=+0.557206906 container remove 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:47:56 compute-0 systemd[1]: libpod-conmon-467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11.scope: Deactivated successfully.
Oct 08 09:47:56 compute-0 sudo[101867]: pam_unix(sudo:session): session closed for user root
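
That closes the "ceph-volume ... lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" run started by sudo[101867]: the container reported "passed data devices: 0 physical, 1 LVM" and "All data devices are unavailable", which is consistent with the LV already carrying a prepared OSD (the lvm list output a moment later shows it tagged with ceph.osd_id=1), so there was nothing new to create. A hedged sketch of checking an LV for ceph-volume tags before attempting batch, using lvs JSON reporting; the function name is illustrative:

    import json
    import subprocess

    def lv_already_prepared(vg: str, lv: str) -> bool:
        """Return True if the LV carries ceph.osd_id tags, i.e. ceph-volume already prepared it."""
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_name,lv_tags", f"{vg}/{lv}"],
            check=True, capture_output=True, text=True).stdout
        rows = json.loads(out)["report"][0]["lv"]
        return any("ceph.osd_id=" in row.get("lv_tags", "") for row in rows)

    # lv_already_prepared("ceph_vg0", "ceph_lv0")  -> True on this host, per the lvm list output below
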
Oct 08 09:47:56 compute-0 sudo[102016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:56 compute-0 sudo[102016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:56 compute-0 sudo[102016]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:56 compute-0 sudo[102041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:47:56 compute-0 sudo[102041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:56 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 08 09:47:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:56.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 08 09:47:56 compute-0 ceph-mon[73572]: 12.c scrub starts
Oct 08 09:47:56 compute-0 ceph-mon[73572]: 12.c scrub ok
Oct 08 09:47:56 compute-0 ceph-mon[73572]: 8.4 scrub starts
Oct 08 09:47:56 compute-0 ceph-mon[73572]: 8.4 scrub ok
Oct 08 09:47:56 compute-0 ceph-mon[73572]: pgmap v12: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 11 op/s; 45 B/s, 4 objects/s recovering
Oct 08 09:47:56 compute-0 ceph-mon[73572]: 12.2 scrub starts
Oct 08 09:47:56 compute-0 ceph-mon[73572]: 12.2 scrub ok
Oct 08 09:47:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:57.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:57 compute-0 podman[102107]: 2025-10-08 09:47:57.201773651 +0000 UTC m=+0.076001433 container create 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:47:57 compute-0 podman[102107]: 2025-10-08 09:47:57.146735022 +0000 UTC m=+0.020962854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:47:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:57 compute-0 systemd[1]: Started libpod-conmon-6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3.scope.
Oct 08 09:47:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:57 compute-0 podman[102107]: 2025-10-08 09:47:57.304601585 +0000 UTC m=+0.178829387 container init 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:47:57 compute-0 podman[102107]: 2025-10-08 09:47:57.310365721 +0000 UTC m=+0.184593503 container start 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 09:47:57 compute-0 determined_sutherland[102128]: 167 167
Oct 08 09:47:57 compute-0 systemd[1]: libpod-6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3.scope: Deactivated successfully.
Oct 08 09:47:57 compute-0 conmon[102128]: conmon 6b6d4c6f5aaf14123ad7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3.scope/container/memory.events
Oct 08 09:47:57 compute-0 podman[102107]: 2025-10-08 09:47:57.333191502 +0000 UTC m=+0.207419294 container attach 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 09:47:57 compute-0 podman[102107]: 2025-10-08 09:47:57.333827678 +0000 UTC m=+0.208055460 container died 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 09:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-34a9f051e7fbadf65077511d5353aa3ea5a412bdf8e0a373c27924e7a37b6e72-merged.mount: Deactivated successfully.
Oct 08 09:47:57 compute-0 podman[102107]: 2025-10-08 09:47:57.381005857 +0000 UTC m=+0.255233639 container remove 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:47:57 compute-0 systemd[1]: libpod-conmon-6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3.scope: Deactivated successfully.
Oct 08 09:47:57 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Oct 08 09:47:57 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Oct 08 09:47:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:57 compute-0 podman[102155]: 2025-10-08 09:47:57.584811898 +0000 UTC m=+0.053737156 container create ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:47:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v13: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 9 op/s; 36 B/s, 3 objects/s recovering
Oct 08 09:47:57 compute-0 systemd[1]: Started libpod-conmon-ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f.scope.
Oct 08 09:47:57 compute-0 podman[102155]: 2025-10-08 09:47:57.55971678 +0000 UTC m=+0.028642058 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:47:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:57 compute-0 podman[102155]: 2025-10-08 09:47:57.684192895 +0000 UTC m=+0.153118183 container init ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:47:57 compute-0 podman[102155]: 2025-10-08 09:47:57.690471145 +0000 UTC m=+0.159396393 container start ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:47:57 compute-0 podman[102155]: 2025-10-08 09:47:57.696304272 +0000 UTC m=+0.165229620 container attach ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]: {
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:     "1": [
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:         {
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "devices": [
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "/dev/loop3"
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             ],
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "lv_name": "ceph_lv0",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "lv_size": "21470642176",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "name": "ceph_lv0",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "tags": {
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.cluster_name": "ceph",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.crush_device_class": "",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.encrypted": "0",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.osd_id": "1",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.type": "block",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.vdo": "0",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:                 "ceph.with_tpm": "0"
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             },
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "type": "block",
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:             "vg_name": "ceph_vg0"
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:         }
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]:     ]
Oct 08 09:47:57 compute-0 blissful_lamarr[102172]: }
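
The JSON emitted by the blissful_lamarr container is the output of "ceph-volume lvm list --format json": a mapping of OSD id ("1") to its logical volumes, with the ceph.* LV tags carrying the OSD fsid, cluster fsid, and block device path. A short Python sketch for flattening that structure into per-OSD records, assuming the raw JSON text is already in hand:

    import json

    def parse_lvm_list(raw: str) -> list[dict]:
        """Flatten 'ceph-volume lvm list --format json' output into per-OSD records."""
        osds = []
        for osd_id, entries in json.loads(raw).items():
            for entry in entries:
                if entry.get("type") != "block":
                    continue
                tags = entry.get("tags", {})
                osds.append({
                    "osd_id": osd_id,                          # "1" in the output above
                    "osd_fsid": tags.get("ceph.osd_fsid"),     # 85fe3e7b-5e0f-4a19-934c-310215b2e933
                    "block": entry.get("lv_path"),             # /dev/ceph_vg0/ceph_lv0
                    "cluster_fsid": tags.get("ceph.cluster_fsid"),
                })
        return osds
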
Oct 08 09:47:58 compute-0 systemd[1]: libpod-ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f.scope: Deactivated successfully.
Oct 08 09:47:58 compute-0 podman[102155]: 2025-10-08 09:47:58.01051791 +0000 UTC m=+0.479443158 container died ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 09:47:58 compute-0 ceph-mon[73572]: 10.8 scrub starts
Oct 08 09:47:58 compute-0 ceph-mon[73572]: 10.8 scrub ok
Oct 08 09:47:58 compute-0 ceph-mon[73572]: 8.8 scrub starts
Oct 08 09:47:58 compute-0 ceph-mon[73572]: 8.8 scrub ok
Oct 08 09:47:58 compute-0 ceph-mon[73572]: 12.13 scrub starts
Oct 08 09:47:58 compute-0 ceph-mon[73572]: 12.13 scrub ok
Oct 08 09:47:58 compute-0 ceph-mon[73572]: 12.8 scrub starts
Oct 08 09:47:58 compute-0 ceph-mon[73572]: 12.8 scrub ok
Oct 08 09:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1-merged.mount: Deactivated successfully.
Oct 08 09:47:58 compute-0 sudo[100625]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:58 compute-0 podman[102155]: 2025-10-08 09:47:58.155103776 +0000 UTC m=+0.624029024 container remove ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:47:58 compute-0 systemd[1]: libpod-conmon-ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f.scope: Deactivated successfully.
Oct 08 09:47:58 compute-0 sudo[102041]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:58 compute-0 sudo[102220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:47:58 compute-0 sudo[102220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:58 compute-0 sudo[102220]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:58 compute-0 sudo[102245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:47:58 compute-0 sudo[102245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:47:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:47:58 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 08 09:47:58 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 08 09:47:58 compute-0 sshd-session[99450]: Connection closed by 192.168.122.30 port 35646
Oct 08 09:47:58 compute-0 sshd-session[99446]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:47:58 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Oct 08 09:47:58 compute-0 systemd[1]: session-38.scope: Consumed 8.080s CPU time.
Oct 08 09:47:58 compute-0 systemd-logind[798]: Session 38 logged out. Waiting for processes to exit.
Oct 08 09:47:58 compute-0 systemd-logind[798]: Removed session 38.
Oct 08 09:47:58 compute-0 podman[102312]: 2025-10-08 09:47:58.787237935 +0000 UTC m=+0.057096933 container create e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 09:47:58 compute-0 systemd[1]: Started libpod-conmon-e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0.scope.
Oct 08 09:47:58 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:58 compute-0 podman[102312]: 2025-10-08 09:47:58.767166875 +0000 UTC m=+0.037025913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:47:58 compute-0 podman[102312]: 2025-10-08 09:47:58.86572226 +0000 UTC m=+0.135581258 container init e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:47:58 compute-0 podman[102312]: 2025-10-08 09:47:58.873858667 +0000 UTC m=+0.143717655 container start e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:47:58 compute-0 podman[102312]: 2025-10-08 09:47:58.877258564 +0000 UTC m=+0.147117552 container attach e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:47:58 compute-0 objective_noether[102330]: 167 167
Oct 08 09:47:58 compute-0 systemd[1]: libpod-e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0.scope: Deactivated successfully.
Oct 08 09:47:58 compute-0 podman[102312]: 2025-10-08 09:47:58.879648454 +0000 UTC m=+0.149507442 container died e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d74f7219d7bd527352181f4312edc2e4f72c2ee5fc73abbf4a32434fbf1b18a1-merged.mount: Deactivated successfully.
Oct 08 09:47:58 compute-0 podman[102312]: 2025-10-08 09:47:58.918403109 +0000 UTC m=+0.188262097 container remove e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:47:58 compute-0 systemd[1]: libpod-conmon-e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0.scope: Deactivated successfully.
Oct 08 09:47:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:58 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:58.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:47:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:47:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:59.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:47:59 compute-0 podman[102354]: 2025-10-08 09:47:59.076065428 +0000 UTC m=+0.043973650 container create c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:47:59 compute-0 ceph-mon[73572]: 11.f scrub starts
Oct 08 09:47:59 compute-0 ceph-mon[73572]: 11.f scrub ok
Oct 08 09:47:59 compute-0 ceph-mon[73572]: pgmap v13: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 9 op/s; 36 B/s, 3 objects/s recovering
Oct 08 09:47:59 compute-0 ceph-mon[73572]: 10.10 scrub starts
Oct 08 09:47:59 compute-0 ceph-mon[73572]: 10.10 scrub ok
Oct 08 09:47:59 compute-0 ceph-mon[73572]: 10.19 scrub starts
Oct 08 09:47:59 compute-0 ceph-mon[73572]: 10.19 scrub ok
Oct 08 09:47:59 compute-0 systemd[1]: Started libpod-conmon-c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd.scope.
Oct 08 09:47:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:47:59 compute-0 podman[102354]: 2025-10-08 09:47:59.054477218 +0000 UTC m=+0.022385480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:47:59 compute-0 podman[102354]: 2025-10-08 09:47:59.170833807 +0000 UTC m=+0.138742049 container init c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:47:59 compute-0 podman[102354]: 2025-10-08 09:47:59.179928618 +0000 UTC m=+0.147836840 container start c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:47:59 compute-0 podman[102354]: 2025-10-08 09:47:59.184184646 +0000 UTC m=+0.152092868 container attach c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 09:47:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:59 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 08 09:47:59 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 08 09:47:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:47:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3 op/s; 28 B/s, 2 objects/s recovering
Oct 08 09:47:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Oct 08 09:47:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 08 09:47:59 compute-0 lvm[102444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:47:59 compute-0 lvm[102444]: VG ceph_vg0 finished
Oct 08 09:47:59 compute-0 sharp_moser[102370]: {}
Oct 08 09:47:59 compute-0 systemd[1]: libpod-c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd.scope: Deactivated successfully.
Oct 08 09:47:59 compute-0 systemd[1]: libpod-c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd.scope: Consumed 1.110s CPU time.
Oct 08 09:47:59 compute-0 podman[102354]: 2025-10-08 09:47:59.869939088 +0000 UTC m=+0.837847330 container died c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 09:47:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8-merged.mount: Deactivated successfully.
Oct 08 09:47:59 compute-0 podman[102354]: 2025-10-08 09:47:59.913419654 +0000 UTC m=+0.881327896 container remove c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 08 09:47:59 compute-0 systemd[1]: libpod-conmon-c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd.scope: Deactivated successfully.
Oct 08 09:47:59 compute-0 sudo[102245]: pam_unix(sudo:session): session closed for user root
Oct 08 09:47:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:47:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:47:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:47:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:00 compute-0 sudo[102460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:48:00 compute-0 sudo[102459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:48:00 compute-0 sudo[102460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:00 compute-0 sudo[102459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 08 09:48:00 compute-0 sudo[102460]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:00 compute-0 sudo[102459]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 08 09:48:00 compute-0 ceph-mon[73572]: 11.1 deep-scrub starts
Oct 08 09:48:00 compute-0 ceph-mon[73572]: 11.1 deep-scrub ok
Oct 08 09:48:00 compute-0 ceph-mon[73572]: 11.3 scrub starts
Oct 08 09:48:00 compute-0 ceph-mon[73572]: 11.3 scrub ok
Oct 08 09:48:00 compute-0 ceph-mon[73572]: 10.18 scrub starts
Oct 08 09:48:00 compute-0 ceph-mon[73572]: 10.18 scrub ok
Oct 08 09:48:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 08 09:48:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:00 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Oct 08 09:48:00 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:00 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 08 09:48:00 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 08 09:48:00 compute-0 sudo[102509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:00 compute-0 sudo[102509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:00 compute-0 sudo[102509]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:00 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Oct 08 09:48:00 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Oct 08 09:48:00 compute-0 sudo[102534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:48:00 compute-0 sudo[102534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:00 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 76 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=76 pruub=13.778754234s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 222.315643311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:00 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 76 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=76 pruub=13.778483391s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.315643311s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:00 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 76 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=76 pruub=13.780656815s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 222.318374634s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:00 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 76 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=76 pruub=13.780499458s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.318374634s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:00 compute-0 podman[102576]: 2025-10-08 09:48:00.729937251 +0000 UTC m=+0.054365394 container create 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:48:00 compute-0 systemd[1]: Started libpod-conmon-7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c.scope.
Oct 08 09:48:00 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:00 compute-0 podman[102576]: 2025-10-08 09:48:00.7086864 +0000 UTC m=+0.033114633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:48:00 compute-0 podman[102576]: 2025-10-08 09:48:00.813918665 +0000 UTC m=+0.138346818 container init 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:48:00 compute-0 podman[102576]: 2025-10-08 09:48:00.825017728 +0000 UTC m=+0.149445871 container start 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:48:00 compute-0 podman[102576]: 2025-10-08 09:48:00.828751013 +0000 UTC m=+0.153179176 container attach 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:48:00 compute-0 xenodochial_lalande[102593]: 167 167
Oct 08 09:48:00 compute-0 systemd[1]: libpod-7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c.scope: Deactivated successfully.
Oct 08 09:48:00 compute-0 podman[102576]: 2025-10-08 09:48:00.833717029 +0000 UTC m=+0.158145202 container died 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-af9abe252850334ceae13b54661740285ff6e5db379db9340334795f26a8d33d-merged.mount: Deactivated successfully.
Oct 08 09:48:00 compute-0 podman[102576]: 2025-10-08 09:48:00.87942181 +0000 UTC m=+0.203849953 container remove 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Oct 08 09:48:00 compute-0 systemd[1]: libpod-conmon-7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c.scope: Deactivated successfully.
Oct 08 09:48:00 compute-0 sudo[102534]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:00 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:00 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ixicfj (monmap changed)...
Oct 08 09:48:00 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ixicfj (monmap changed)...
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 09:48:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:00 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct 08 09:48:00 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct 08 09:48:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:00.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:01 compute-0 sudo[102612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:01 compute-0 sudo[102612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:01 compute-0 sudo[102612]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:01.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:01 compute-0 sudo[102638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:48:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 08 09:48:01 compute-0 sudo[102638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:01 compute-0 ceph-mon[73572]: 11.4 scrub starts
Oct 08 09:48:01 compute-0 ceph-mon[73572]: 11.4 scrub ok
Oct 08 09:48:01 compute-0 ceph-mon[73572]: pgmap v14: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3 op/s; 28 B/s, 2 objects/s recovering
Oct 08 09:48:01 compute-0 ceph-mon[73572]: 10.12 deep-scrub starts
Oct 08 09:48:01 compute-0 ceph-mon[73572]: 10.12 deep-scrub ok
Oct 08 09:48:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 08 09:48:01 compute-0 ceph-mon[73572]: osdmap e76: 3 total, 3 up, 3 in
Oct 08 09:48:01 compute-0 ceph-mon[73572]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 08 09:48:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:48:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:48:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:01 compute-0 ceph-mon[73572]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 08 09:48:01 compute-0 ceph-mon[73572]: 12.1c scrub starts
Oct 08 09:48:01 compute-0 ceph-mon[73572]: 12.1c scrub ok
Oct 08 09:48:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:01 compute-0 ceph-mon[73572]: Reconfiguring mgr.compute-0.ixicfj (monmap changed)...
Oct 08 09:48:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 08 09:48:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 09:48:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:01 compute-0 ceph-mon[73572]: Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct 08 09:48:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 08 09:48:01 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 08 09:48:01 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 77 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:01 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 77 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:48:01 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 77 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:01 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 77 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:48:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:01 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct 08 09:48:01 compute-0 podman[102680]: 2025-10-08 09:48:01.431996587 +0000 UTC m=+0.041797243 container create afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:48:01 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct 08 09:48:01 compute-0 systemd[1]: Started libpod-conmon-afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9.scope.
Oct 08 09:48:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:01 compute-0 podman[102680]: 2025-10-08 09:48:01.501807022 +0000 UTC m=+0.111607698 container init afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 09:48:01 compute-0 podman[102680]: 2025-10-08 09:48:01.413506708 +0000 UTC m=+0.023307394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 08 09:48:01 compute-0 podman[102680]: 2025-10-08 09:48:01.509796465 +0000 UTC m=+0.119597131 container start afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:01 compute-0 amazing_hodgkin[102696]: 167 167
Oct 08 09:48:01 compute-0 systemd[1]: libpod-afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9.scope: Deactivated successfully.
Oct 08 09:48:01 compute-0 podman[102680]: 2025-10-08 09:48:01.530901572 +0000 UTC m=+0.140702238 container attach afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 09:48:01 compute-0 podman[102680]: 2025-10-08 09:48:01.531936118 +0000 UTC m=+0.141736784 container died afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e2e9c715cc8566725ca229240ac2c29167039bea26234fbb389363c4e4f2435-merged.mount: Deactivated successfully.
Oct 08 09:48:01 compute-0 podman[102680]: 2025-10-08 09:48:01.570073298 +0000 UTC m=+0.179873954 container remove afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:48:01 compute-0 systemd[1]: libpod-conmon-afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9.scope: Deactivated successfully.
Oct 08 09:48:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v17: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Oct 08 09:48:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 08 09:48:01 compute-0 sudo[102638]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:48:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:48:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:01 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Oct 08 09:48:01 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Oct 08 09:48:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 08 09:48:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:48:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:01 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:01 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Oct 08 09:48:01 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Oct 08 09:48:01 compute-0 sudo[102715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:01 compute-0 sudo[102715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:01 compute-0 sudo[102715]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:01 compute-0 sudo[102740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:48:01 compute-0 sudo[102740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:02 compute-0 podman[102784]: 2025-10-08 09:48:02.05914684 +0000 UTC m=+0.079101222 container create b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:48:02 compute-0 podman[102784]: 2025-10-08 09:48:01.999644577 +0000 UTC m=+0.019598989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:48:02 compute-0 systemd[1]: Started libpod-conmon-b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18.scope.
Oct 08 09:48:02 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:02 compute-0 podman[102784]: 2025-10-08 09:48:02.13267828 +0000 UTC m=+0.152632662 container init b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 09:48:02 compute-0 ceph-mon[73572]: 11.14 scrub starts
Oct 08 09:48:02 compute-0 ceph-mon[73572]: 11.14 scrub ok
Oct 08 09:48:02 compute-0 ceph-mon[73572]: 12.1d scrub starts
Oct 08 09:48:02 compute-0 ceph-mon[73572]: 12.1d scrub ok
Oct 08 09:48:02 compute-0 ceph-mon[73572]: osdmap e77: 3 total, 3 up, 3 in
Oct 08 09:48:02 compute-0 ceph-mon[73572]: 8.14 scrub starts
Oct 08 09:48:02 compute-0 ceph-mon[73572]: 8.14 scrub ok
Oct 08 09:48:02 compute-0 ceph-mon[73572]: 10.1b scrub starts
Oct 08 09:48:02 compute-0 ceph-mon[73572]: 10.1b scrub ok
Oct 08 09:48:02 compute-0 ceph-mon[73572]: pgmap v17: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 08 09:48:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:02 compute-0 ceph-mon[73572]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 08 09:48:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:48:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:02 compute-0 ceph-mon[73572]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 08 09:48:02 compute-0 podman[102784]: 2025-10-08 09:48:02.137786169 +0000 UTC m=+0.157740551 container start b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:48:02 compute-0 serene_varahamihira[102800]: 167 167
Oct 08 09:48:02 compute-0 systemd[1]: libpod-b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18.scope: Deactivated successfully.
Oct 08 09:48:02 compute-0 podman[102784]: 2025-10-08 09:48:02.150187345 +0000 UTC m=+0.170141727 container attach b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:48:02 compute-0 podman[102784]: 2025-10-08 09:48:02.150473281 +0000 UTC m=+0.170427663 container died b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 08 09:48:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 08 09:48:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 08 09:48:02 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 08 09:48:02 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=78 pruub=12.021979332s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 222.315689087s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:02 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=78 pruub=12.021944046s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.315689087s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:02 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=78 pruub=12.022686005s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 222.318298340s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:02 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=78 pruub=12.022484779s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.318298340s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-10e3c527bb0fe897c9cd523253bdd42d01dc559f3bb5f5914ddc00ef0f60a6e1-merged.mount: Deactivated successfully.
Oct 08 09:48:02 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:48:02 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:48:02 compute-0 podman[102784]: 2025-10-08 09:48:02.372303091 +0000 UTC m=+0.392257473 container remove b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:02 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Oct 08 09:48:02 compute-0 systemd[1]: libpod-conmon-b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18.scope: Deactivated successfully.
Oct 08 09:48:02 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Oct 08 09:48:02 compute-0 sudo[102740]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:48:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:48:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:02 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct 08 09:48:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct 08 09:48:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 08 09:48:02 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct 08 09:48:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:02 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Oct 08 09:48:02 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Oct 08 09:48:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=infra.usagestats t=2025-10-08T09:48:02.55793221Z level=info msg="Usage stats are ready to report"
Oct 08 09:48:02 compute-0 sudo[102816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:02 compute-0 sudo[102816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:02 compute-0 sudo[102816]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:02 compute-0 sudo[102841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:48:02 compute-0 sudo[102841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:48:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:48:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:02 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:02.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 08 09:48:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:03.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 08 09:48:03 compute-0 podman[102881]: 2025-10-08 09:48:03.095850765 +0000 UTC m=+0.070650078 container create 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 09:48:03 compute-0 systemd[1]: Started libpod-conmon-0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1.scope.
Oct 08 09:48:03 compute-0 podman[102881]: 2025-10-08 09:48:03.06222409 +0000 UTC m=+0.037023383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:48:03 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:03 compute-0 podman[102881]: 2025-10-08 09:48:03.219829075 +0000 UTC m=+0.194628388 container init 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:48:03 compute-0 podman[102881]: 2025-10-08 09:48:03.229647976 +0000 UTC m=+0.204447239 container start 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Oct 08 09:48:03 compute-0 gracious_dirac[102899]: 167 167
Oct 08 09:48:03 compute-0 systemd[1]: libpod-0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1.scope: Deactivated successfully.
Oct 08 09:48:03 compute-0 conmon[102899]: conmon 0f55946eef5a01dd79d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1.scope/container/memory.events
Oct 08 09:48:03 compute-0 podman[102881]: 2025-10-08 09:48:03.247311075 +0000 UTC m=+0.222110388 container attach 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:48:03 compute-0 podman[102881]: 2025-10-08 09:48:03.248283189 +0000 UTC m=+0.223082492 container died 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 09:48:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 08 09:48:03 compute-0 ceph-mon[73572]: 12.1e scrub starts
Oct 08 09:48:03 compute-0 ceph-mon[73572]: 12.1e scrub ok
Oct 08 09:48:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 08 09:48:03 compute-0 ceph-mon[73572]: osdmap e78: 3 total, 3 up, 3 in
Oct 08 09:48:03 compute-0 ceph-mon[73572]: 8.12 scrub starts
Oct 08 09:48:03 compute-0 ceph-mon[73572]: 8.12 scrub ok
Oct 08 09:48:03 compute-0 ceph-mon[73572]: 12.19 scrub starts
Oct 08 09:48:03 compute-0 ceph-mon[73572]: 12.19 scrub ok
Oct 08 09:48:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:03 compute-0 ceph-mon[73572]: Reconfiguring osd.1 (monmap changed)...
Oct 08 09:48:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 08 09:48:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:03 compute-0 ceph-mon[73572]: Reconfiguring daemon osd.1 on compute-0
Oct 08 09:48:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f2e2401d1093c138be06828d805a722fde45e5cd964d8ae2065523ac9e99033-merged.mount: Deactivated successfully.
Oct 08 09:48:03 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.e deep-scrub starts
Oct 08 09:48:03 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.e deep-scrub ok
Oct 08 09:48:03 compute-0 podman[102881]: 2025-10-08 09:48:03.403766552 +0000 UTC m=+0.378565835 container remove 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:48:03 compute-0 systemd[1]: libpod-conmon-0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1.scope: Deactivated successfully.
Oct 08 09:48:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 08 09:48:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 08 09:48:03 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:03 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:48:03 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=6 ec=54/38 lis/c=77/54 les/c/f=78/56/0 sis=79 pruub=14.804843903s) [2] async=[2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 226.336395264s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:03 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=6 ec=54/38 lis/c=77/54 les/c/f=78/56/0 sis=79 pruub=14.804767609s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.336395264s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:03 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=5 ec=54/38 lis/c=77/54 les/c/f=78/56/0 sis=79 pruub=14.803595543s) [2] async=[2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 226.336410522s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:03 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:03 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=5 ec=54/38 lis/c=77/54 les/c/f=78/56/0 sis=79 pruub=14.803548813s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.336410522s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:03 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:48:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:03 compute-0 sudo[102841]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:48:03 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:48:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v20: 353 pgs: 2 remapped+peering, 351 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:03 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:03 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct 08 09:48:03 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct 08 09:48:03 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct 08 09:48:03 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct 08 09:48:03 compute-0 sudo[102924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:03 compute-0 sudo[102924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:03 compute-0 sudo[102924]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:03 compute-0 sudo[102949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:48:03 compute-0 sudo[102949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:04 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:48:04 compute-0 podman[103024]: 2025-10-08 09:48:04.234994613 +0000 UTC m=+0.048673609 container died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-54af9510d66390823c3b362131dbb950b9145f4e5b56d1ab94c9e3f0f29ca9ac-merged.mount: Deactivated successfully.
Oct 08 09:48:04 compute-0 podman[103024]: 2025-10-08 09:48:04.278264952 +0000 UTC m=+0.091943958 container remove 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:04 compute-0 bash[103024]: ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0
Oct 08 09:48:04 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Oct 08 09:48:04 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 08 09:48:04 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 08 09:48:04 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@node-exporter.compute-0.service: Failed with result 'exit-code'.
Oct 08 09:48:04 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:48:04 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@node-exporter.compute-0.service: Consumed 2.090s CPU time.
Oct 08 09:48:04 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:48:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 08 09:48:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 08 09:48:04 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 08 09:48:04 compute-0 ceph-mon[73572]: 12.3 scrub starts
Oct 08 09:48:04 compute-0 ceph-mon[73572]: 12.3 scrub ok
Oct 08 09:48:04 compute-0 ceph-mon[73572]: 11.1b deep-scrub starts
Oct 08 09:48:04 compute-0 ceph-mon[73572]: 11.1b deep-scrub ok
Oct 08 09:48:04 compute-0 ceph-mon[73572]: 12.e deep-scrub starts
Oct 08 09:48:04 compute-0 ceph-mon[73572]: 12.e deep-scrub ok
Oct 08 09:48:04 compute-0 ceph-mon[73572]: osdmap e79: 3 total, 3 up, 3 in
Oct 08 09:48:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:04 compute-0 ceph-mon[73572]: pgmap v20: 353 pgs: 2 remapped+peering, 351 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:04 compute-0 ceph-mon[73572]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct 08 09:48:04 compute-0 ceph-mon[73572]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct 08 09:48:04 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 80 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:48:04 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 80 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:48:04 compute-0 podman[103127]: 2025-10-08 09:48:04.656272761 +0000 UTC m=+0.046693937 container create 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95dff869684ab02b35419a56871107ff724c8d375d95c3c72431a4297b3a8cef/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:04 compute-0 podman[103127]: 2025-10-08 09:48:04.713431105 +0000 UTC m=+0.103852281 container init 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:04 compute-0 podman[103127]: 2025-10-08 09:48:04.717931859 +0000 UTC m=+0.108353015 container start 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:04 compute-0 bash[103127]: 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b
Oct 08 09:48:04 compute-0 podman[103127]: 2025-10-08 09:48:04.633590825 +0000 UTC m=+0.024012021 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.723Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.723Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.724Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.724Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.724Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=arp
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=bcache
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=bonding
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=cpu
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=dmi
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=edac
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=entropy
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=filefd
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=netclass
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=netdev
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=netstat
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=nfs
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=nvme
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=os
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=pressure
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=rapl
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=selinux
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=softnet
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=stat
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=textfile
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=time
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=uname
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=xfs
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=zfs
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.726Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.726Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 08 09:48:04 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:48:04 compute-0 sudo[102949]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:48:04 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:48:04 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:04 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 08 09:48:04 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 08 09:48:04 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 08 09:48:04 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 08 09:48:04 compute-0 sudo[103151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:04 compute-0 sudo[103151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:04 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:04 compute-0 sudo[103151]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:04.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:04 compute-0 sudo[103176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:48:04 compute-0 sudo[103176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:05.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:05 compute-0 podman[103219]: 2025-10-08 09:48:05.289621342 +0000 UTC m=+0.043022604 volume create adf8338bf778e2a8bf2a17ac62f888750645e9e71143f0095f0534229c41927b
Oct 08 09:48:05 compute-0 podman[103219]: 2025-10-08 09:48:05.298769145 +0000 UTC m=+0.052170407 container create 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 systemd[1]: Started libpod-conmon-36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce.scope.
Oct 08 09:48:05 compute-0 podman[103219]: 2025-10-08 09:48:05.269187152 +0000 UTC m=+0.022588444 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 08 09:48:05 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e5b704ece5d2d4f4dd02d747db42804a656fea49da8f4afdf3f5483f3d1a0e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:05 compute-0 podman[103219]: 2025-10-08 09:48:05.390515347 +0000 UTC m=+0.143916619 container init 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Oct 08 09:48:05 compute-0 podman[103219]: 2025-10-08 09:48:05.397523895 +0000 UTC m=+0.150925157 container start 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 agitated_taussig[103235]: 65534 65534
Oct 08 09:48:05 compute-0 systemd[1]: libpod-36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce.scope: Deactivated successfully.
Oct 08 09:48:05 compute-0 podman[103219]: 2025-10-08 09:48:05.401167307 +0000 UTC m=+0.154568589 container attach 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 podman[103219]: 2025-10-08 09:48:05.401564428 +0000 UTC m=+0.154965700 container died 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Oct 08 09:48:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8e5b704ece5d2d4f4dd02d747db42804a656fea49da8f4afdf3f5483f3d1a0e-merged.mount: Deactivated successfully.
Oct 08 09:48:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 08 09:48:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v22: 353 pgs: 2 peering, 2 remapped+peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 2 objects/s recovering
Oct 08 09:48:05 compute-0 ceph-mon[73572]: 10.f deep-scrub starts
Oct 08 09:48:05 compute-0 ceph-mon[73572]: 10.f deep-scrub ok
Oct 08 09:48:05 compute-0 ceph-mon[73572]: 11.1e scrub starts
Oct 08 09:48:05 compute-0 ceph-mon[73572]: 11.1e scrub ok
Oct 08 09:48:05 compute-0 ceph-mon[73572]: 10.5 scrub starts
Oct 08 09:48:05 compute-0 ceph-mon[73572]: 10.5 scrub ok
Oct 08 09:48:05 compute-0 ceph-mon[73572]: osdmap e80: 3 total, 3 up, 3 in
Oct 08 09:48:05 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:05 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:05 compute-0 ceph-mon[73572]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 08 09:48:05 compute-0 ceph-mon[73572]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 08 09:48:05 compute-0 podman[103219]: 2025-10-08 09:48:05.634771766 +0000 UTC m=+0.388173028 container remove 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 podman[103219]: 2025-10-08 09:48:05.648437394 +0000 UTC m=+0.401838656 volume remove adf8338bf778e2a8bf2a17ac62f888750645e9e71143f0095f0534229c41927b
Oct 08 09:48:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 08 09:48:05 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 08 09:48:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:05] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct 08 09:48:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:05] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct 08 09:48:05 compute-0 podman[103251]: 2025-10-08 09:48:05.755511976 +0000 UTC m=+0.091135898 volume create a2a0a9d8a3f4496418a0bb6851aef58b5aa2d7d827aa03106d2e326dfe9b006d
Oct 08 09:48:05 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 81 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=5 ec=54/38 lis/c=79/54 les/c/f=80/56/0 sis=81 pruub=14.740247726s) [2] async=[2] r=-1 lpr=81 pi=[54,81)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 228.572326660s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:05 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 81 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=5 ec=54/38 lis/c=79/54 les/c/f=80/56/0 sis=81 pruub=14.739789963s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.572326660s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:05 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 81 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=6 ec=54/38 lis/c=79/54 les/c/f=80/56/0 sis=81 pruub=14.739488602s) [2] async=[2] r=-1 lpr=81 pi=[54,81)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 228.572357178s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:05 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 81 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=6 ec=54/38 lis/c=79/54 les/c/f=80/56/0 sis=81 pruub=14.739418030s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.572357178s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:05 compute-0 podman[103251]: 2025-10-08 09:48:05.683685909 +0000 UTC m=+0.019309811 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 08 09:48:05 compute-0 podman[103251]: 2025-10-08 09:48:05.787268813 +0000 UTC m=+0.122892735 container create 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 systemd[1]: Started libpod-conmon-57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06.scope.
Oct 08 09:48:05 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b644418705b553205884c22cb0a94d35ba92b050be5cd432f98bcc2cd210ee72/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:05 compute-0 systemd[1]: libpod-conmon-36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce.scope: Deactivated successfully.
Oct 08 09:48:05 compute-0 podman[103251]: 2025-10-08 09:48:05.917166345 +0000 UTC m=+0.252790257 container init 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 podman[103251]: 2025-10-08 09:48:05.923388034 +0000 UTC m=+0.259011916 container start 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 festive_joliot[103269]: 65534 65534
Oct 08 09:48:05 compute-0 systemd[1]: libpod-57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06.scope: Deactivated successfully.
Oct 08 09:48:05 compute-0 conmon[103269]: conmon 57cd7d8d326b1f9e36dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06.scope/container/memory.events
Oct 08 09:48:05 compute-0 podman[103251]: 2025-10-08 09:48:05.963512923 +0000 UTC m=+0.299136805 container attach 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:05 compute-0 podman[103251]: 2025-10-08 09:48:05.964633852 +0000 UTC m=+0.300257734 container died 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b644418705b553205884c22cb0a94d35ba92b050be5cd432f98bcc2cd210ee72-merged.mount: Deactivated successfully.
Oct 08 09:48:06 compute-0 podman[103251]: 2025-10-08 09:48:06.176459207 +0000 UTC m=+0.512083089 container remove 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:06 compute-0 podman[103251]: 2025-10-08 09:48:06.188935284 +0000 UTC m=+0.524559166 volume remove a2a0a9d8a3f4496418a0bb6851aef58b5aa2d7d827aa03106d2e326dfe9b006d
Oct 08 09:48:06 compute-0 systemd[1]: libpod-conmon-57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06.scope: Deactivated successfully.
Oct 08 09:48:06 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:48:06 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Oct 08 09:48:06 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Oct 08 09:48:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:48:06.403Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Oct 08 09:48:06 compute-0 podman[103319]: 2025-10-08 09:48:06.424270017 +0000 UTC m=+0.061698290 container died 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-56ce96f5b36afca03959d3dd28785acc44bc98ac7848532a544c80c3ee2cbbf3-merged.mount: Deactivated successfully.
Oct 08 09:48:06 compute-0 podman[103319]: 2025-10-08 09:48:06.551553262 +0000 UTC m=+0.188981535 container remove 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:06 compute-0 podman[103319]: 2025-10-08 09:48:06.565814934 +0000 UTC m=+0.203243217 volume remove 00310bf376a0b175ca8d85fb11d168f2f95f64f3756abaadb6e57846efdbc0ea
Oct 08 09:48:06 compute-0 bash[103319]: ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0
Oct 08 09:48:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 08 09:48:06 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@alertmanager.compute-0.service: Deactivated successfully.
Oct 08 09:48:06 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:48:06 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@alertmanager.compute-0.service: Consumed 1.001s CPU time.
Oct 08 09:48:06 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:48:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 08 09:48:06 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 08 09:48:06 compute-0 ceph-mon[73572]: 11.12 scrub starts
Oct 08 09:48:06 compute-0 ceph-mon[73572]: 11.12 scrub ok
Oct 08 09:48:06 compute-0 ceph-mon[73572]: 12.10 scrub starts
Oct 08 09:48:06 compute-0 ceph-mon[73572]: 12.10 scrub ok
Oct 08 09:48:06 compute-0 ceph-mon[73572]: pgmap v22: 353 pgs: 2 peering, 2 remapped+peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 2 objects/s recovering
Oct 08 09:48:06 compute-0 ceph-mon[73572]: osdmap e81: 3 total, 3 up, 3 in
Oct 08 09:48:06 compute-0 podman[103424]: 2025-10-08 09:48:06.900861002 +0000 UTC m=+0.063785883 volume create 4bbbf489bb89a0d856f47e13e48dca9902149cd60d9cee6aaa7ca7a294835ad4
Oct 08 09:48:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:06 compute-0 podman[103424]: 2025-10-08 09:48:06.943251199 +0000 UTC m=+0.106176090 container create feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:06 compute-0 podman[103424]: 2025-10-08 09:48:06.864767685 +0000 UTC m=+0.027692626 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 08 09:48:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:06.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5364c9169be9e454626d1a65d154e138f0d7667590bffb08425ce0bdca000223/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5364c9169be9e454626d1a65d154e138f0d7667590bffb08425ce0bdca000223/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 08 09:48:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:07.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 08 09:48:07 compute-0 podman[103424]: 2025-10-08 09:48:07.108301956 +0000 UTC m=+0.271226867 container init feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:07 compute-0 podman[103424]: 2025-10-08 09:48:07.113552189 +0000 UTC m=+0.276477080 container start feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:07 compute-0 bash[103424]: feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75
Oct 08 09:48:07 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.145Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.145Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.152Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.154Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct 08 09:48:07 compute-0 sudo[103176]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.191Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.192Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 08 09:48:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.196Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.196Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct 08 09:48:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:07 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 08 09:48:07 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:07 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Oct 08 09:48:07 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Oct 08 09:48:07 compute-0 sudo[103461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:07 compute-0 sudo[103461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:07 compute-0 sudo[103461]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:07 compute-0 sudo[103486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct 08 09:48:07 compute-0 sudo[103486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:07 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 08 09:48:07 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 08 09:48:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v25: 353 pgs: 2 peering, 2 remapped+peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 2 objects/s recovering
Oct 08 09:48:07 compute-0 podman[103528]: 2025-10-08 09:48:07.82642008 +0000 UTC m=+0.047050207 container create fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:07 compute-0 ceph-mon[73572]: 12.6 scrub starts
Oct 08 09:48:07 compute-0 ceph-mon[73572]: 8.19 scrub starts
Oct 08 09:48:07 compute-0 ceph-mon[73572]: 12.6 scrub ok
Oct 08 09:48:07 compute-0 ceph-mon[73572]: 8.19 scrub ok
Oct 08 09:48:07 compute-0 ceph-mon[73572]: osdmap e82: 3 total, 3 up, 3 in
Oct 08 09:48:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:07 compute-0 podman[103528]: 2025-10-08 09:48:07.801777894 +0000 UTC m=+0.022408051 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 08 09:48:07 compute-0 systemd[1]: Started libpod-conmon-fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a.scope.
Oct 08 09:48:07 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:07 compute-0 podman[103528]: 2025-10-08 09:48:07.987585667 +0000 UTC m=+0.208215794 container init fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:07 compute-0 podman[103528]: 2025-10-08 09:48:07.995191781 +0000 UTC m=+0.215821948 container start fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:07 compute-0 youthful_rhodes[103545]: 472 0
Oct 08 09:48:07 compute-0 systemd[1]: libpod-fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a.scope: Deactivated successfully.
Oct 08 09:48:07 compute-0 conmon[103545]: conmon fdd2bd8aa4df6721cd55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a.scope/container/memory.events
Oct 08 09:48:08 compute-0 podman[103528]: 2025-10-08 09:48:08.020997667 +0000 UTC m=+0.241627794 container attach fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 podman[103528]: 2025-10-08 09:48:08.021609962 +0000 UTC m=+0.242240079 container died fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-78727395e0c92f3747d7112027ec7fcd8c18678d220f2c33ae783deccacfab56-merged.mount: Deactivated successfully.
Oct 08 09:48:08 compute-0 podman[103528]: 2025-10-08 09:48:08.103547255 +0000 UTC m=+0.324177382 container remove fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 systemd[1]: libpod-conmon-fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a.scope: Deactivated successfully.
Oct 08 09:48:08 compute-0 podman[103563]: 2025-10-08 09:48:08.160647877 +0000 UTC m=+0.039358902 container create d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 systemd[1]: Started libpod-conmon-d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03.scope.
Oct 08 09:48:08 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:08 compute-0 podman[103563]: 2025-10-08 09:48:08.144038925 +0000 UTC m=+0.022749980 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 08 09:48:08 compute-0 podman[103563]: 2025-10-08 09:48:08.244136959 +0000 UTC m=+0.122848014 container init d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 podman[103563]: 2025-10-08 09:48:08.248776927 +0000 UTC m=+0.127487962 container start d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 gifted_meninsky[103579]: 472 0
Oct 08 09:48:08 compute-0 systemd[1]: libpod-d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03.scope: Deactivated successfully.
Oct 08 09:48:08 compute-0 podman[103563]: 2025-10-08 09:48:08.25479255 +0000 UTC m=+0.133503585 container attach d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 podman[103563]: 2025-10-08 09:48:08.255073757 +0000 UTC m=+0.133784792 container died d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c00bf8b9679365c5003ecbde959dfe07a26c32c38d769c5aa65b2df66692bb8-merged.mount: Deactivated successfully.
Oct 08 09:48:08 compute-0 podman[103563]: 2025-10-08 09:48:08.304318629 +0000 UTC m=+0.183029664 container remove d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 systemd[1]: libpod-conmon-d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03.scope: Deactivated successfully.
Oct 08 09:48:08 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 08 09:48:08 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:48:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:48:08 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 08 09:48:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=server t=2025-10-08T09:48:08.540284677Z level=info msg="Shutdown started" reason="System signal: terminated"
Oct 08 09:48:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ticker t=2025-10-08T09:48:08.54038647Z level=info msg=stopped last_tick=2025-10-08T09:48:00Z
Oct 08 09:48:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafana-apiserver t=2025-10-08T09:48:08.540631576Z level=info msg="StorageObjectCountTracker pruner is exiting"
Oct 08 09:48:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=tracing t=2025-10-08T09:48:08.540701078Z level=info msg="Closing tracing"
Oct 08 09:48:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=sqlstore.transactions t=2025-10-08T09:48:08.55254803Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct 08 09:48:08 compute-0 podman[103625]: 2025-10-08 09:48:08.571015089 +0000 UTC m=+0.080691502 container died 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275-merged.mount: Deactivated successfully.
Oct 08 09:48:08 compute-0 podman[103625]: 2025-10-08 09:48:08.723485405 +0000 UTC m=+0.233161818 container remove 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:08 compute-0 bash[103625]: ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0
Oct 08 09:48:08 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@grafana.compute-0.service: Deactivated successfully.
Oct 08 09:48:08 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:48:08 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@grafana.compute-0.service: Consumed 4.168s CPU time.
Oct 08 09:48:08 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:48:08 compute-0 ceph-mon[73572]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 08 09:48:08 compute-0 ceph-mon[73572]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct 08 09:48:08 compute-0 ceph-mon[73572]: 9.e scrub starts
Oct 08 09:48:08 compute-0 ceph-mon[73572]: 9.c scrub starts
Oct 08 09:48:08 compute-0 ceph-mon[73572]: 9.e scrub ok
Oct 08 09:48:08 compute-0 ceph-mon[73572]: 9.c scrub ok
Oct 08 09:48:08 compute-0 ceph-mon[73572]: pgmap v25: 353 pgs: 2 peering, 2 remapped+peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 2 objects/s recovering
Oct 08 09:48:08 compute-0 ceph-mon[73572]: 9.9 scrub starts
Oct 08 09:48:08 compute-0 ceph-mon[73572]: 9.9 scrub ok
Oct 08 09:48:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:08 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:08.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:09 compute-0 podman[103722]: 2025-10-08 09:48:09.028329524 +0000 UTC m=+0.041446015 container create 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:09 compute-0 podman[103722]: 2025-10-08 09:48:09.080205933 +0000 UTC m=+0.093322454 container init 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:09.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:09 compute-0 podman[103722]: 2025-10-08 09:48:09.087863877 +0000 UTC m=+0.100980358 container start 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:09 compute-0 bash[103722]: 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd
Oct 08 09:48:09 compute-0 podman[103722]: 2025-10-08 09:48:09.008655874 +0000 UTC m=+0.021772385 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 08 09:48:09 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:48:09 compute-0 sudo[103486]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:09.154Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000175636s
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240225061Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-08T09:48:09Z
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240519408Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240531859Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240536139Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240539849Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240543649Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240548189Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240553129Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240559239Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240563559Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.24059578Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.2406031Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.2406071Z level=info msg=Target target=[all]
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240616051Z level=info msg="Path Home" path=/usr/share/grafana
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240620751Z level=info msg="Path Data" path=/var/lib/grafana
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240625011Z level=info msg="Path Logs" path=/var/log/grafana
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240630091Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240633951Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240637651Z level=info msg="App mode production"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=sqlstore t=2025-10-08T09:48:09.240951689Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=sqlstore t=2025-10-08T09:48:09.24096921Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=migrator t=2025-10-08T09:48:09.241666367Z level=info msg="Starting DB migrations"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=migrator t=2025-10-08T09:48:09.262017984Z level=info msg="migrations completed" performed=0 skipped=547 duration=585.895µs
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=sqlstore t=2025-10-08T09:48:09.26300259Z level=info msg="Created default organization"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=secrets t=2025-10-08T09:48:09.263556724Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugin.store t=2025-10-08T09:48:09.283725356Z level=info msg="Loading plugins..."
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=local.finder t=2025-10-08T09:48:09.375415598Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugin.store t=2025-10-08T09:48:09.375447288Z level=info msg="Plugins loaded" count=55 duration=91.723102ms
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=query_data t=2025-10-08T09:48:09.378442605Z level=info msg="Query Service initialization"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=live.push_http t=2025-10-08T09:48:09.381347488Z level=info msg="Live Push Gateway initialization"
Oct 08 09:48:09 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.0 deep-scrub starts
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.migration t=2025-10-08T09:48:09.38414702Z level=info msg=Starting
Oct 08 09:48:09 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.state.manager t=2025-10-08T09:48:09.401784697Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=infra.usagestats.collector t=2025-10-08T09:48:09.403881371Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.datasources t=2025-10-08T09:48:09.406455357Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Oct 08 09:48:09 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.0 deep-scrub ok
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.alerting t=2025-10-08T09:48:09.434540301Z level=info msg="starting to provision alerting"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.alerting t=2025-10-08T09:48:09.434575012Z level=info msg="finished to provision alerting"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.state.manager t=2025-10-08T09:48:09.434842157Z level=info msg="Warming state cache for startup"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.multiorg.alertmanager t=2025-10-08T09:48:09.435222737Z level=info msg="Starting MultiOrg Alertmanager"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafanaStorageLogger t=2025-10-08T09:48:09.435647308Z level=info msg="Storage starting"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=http.server t=2025-10-08T09:48:09.438285355Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=http.server t=2025-10-08T09:48:09.438632344Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct 08 09:48:09 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:09 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Oct 08 09:48:09 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Oct 08 09:48:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 08 09:48:09 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:48:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:09 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:09 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Oct 08 09:48:09 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.dashboard t=2025-10-08T09:48:09.47465635Z level=info msg="starting to provision dashboards"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.state.manager t=2025-10-08T09:48:09.489635411Z level=info msg="State cache has been initialized" states=0 duration=54.784004ms
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.scheduler t=2025-10-08T09:48:09.489694782Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ticker t=2025-10-08T09:48:09.489782125Z level=info msg=starting first_tick=2025-10-08T09:48:10Z
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.dashboard t=2025-10-08T09:48:09.497518561Z level=info msg="finished to provision dashboards"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T09:48:09.51164422Z level=info msg="Update check succeeded" duration=76.076824ms
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T09:48:09.511882036Z level=info msg="Update check succeeded" duration=75.661673ms
Oct 08 09:48:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 0 objects/s recovering
Oct 08 09:48:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Oct 08 09:48:09 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana-apiserver t=2025-10-08T09:48:09.670318004Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct 08 09:48:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana-apiserver t=2025-10-08T09:48:09.670717074Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct 08 09:48:10 compute-0 ceph-mon[73572]: 9.a scrub starts
Oct 08 09:48:10 compute-0 ceph-mon[73572]: 9.a scrub ok
Oct 08 09:48:10 compute-0 ceph-mon[73572]: 9.6 deep-scrub starts
Oct 08 09:48:10 compute-0 ceph-mon[73572]: 9.6 deep-scrub ok
Oct 08 09:48:10 compute-0 ceph-mon[73572]: 9.19 scrub starts
Oct 08 09:48:10 compute-0 ceph-mon[73572]: 9.19 scrub ok
Oct 08 09:48:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 08 09:48:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 08 09:48:10 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 08 09:48:10 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 08 09:48:10 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 08 09:48:10 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 08 09:48:10 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 08 09:48:10 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 08 09:48:10 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 83 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=83 pruub=11.736685753s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 230.315979004s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:10 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 83 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=83 pruub=11.736638069s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.315979004s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:10 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 83 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=83 pruub=11.738619804s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 230.318161011s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:10 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 83 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=83 pruub=11.738594055s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.318161011s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:10 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:48:10 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:10 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:48:10 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:10 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Oct 08 09:48:10 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Oct 08 09:48:10 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct 08 09:48:10 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 08 09:48:10 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:10 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:10 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Oct 08 09:48:10 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Oct 08 09:48:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:10 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:10.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:11 compute-0 ceph-mon[73572]: 9.0 deep-scrub starts
Oct 08 09:48:11 compute-0 ceph-mon[73572]: 9.0 deep-scrub ok
Oct 08 09:48:11 compute-0 ceph-mon[73572]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 08 09:48:11 compute-0 ceph-mon[73572]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 08 09:48:11 compute-0 ceph-mon[73572]: 9.1e scrub starts
Oct 08 09:48:11 compute-0 ceph-mon[73572]: 9.1e scrub ok
Oct 08 09:48:11 compute-0 ceph-mon[73572]: pgmap v26: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 0 objects/s recovering
Oct 08 09:48:11 compute-0 ceph-mon[73572]: 12.9 scrub starts
Oct 08 09:48:11 compute-0 ceph-mon[73572]: 12.9 scrub ok
Oct 08 09:48:11 compute-0 ceph-mon[73572]: 9.1 scrub starts
Oct 08 09:48:11 compute-0 ceph-mon[73572]: 9.1 scrub ok
Oct 08 09:48:11 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 08 09:48:11 compute-0 ceph-mon[73572]: osdmap e83: 3 total, 3 up, 3 in
Oct 08 09:48:11 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:11 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:11 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 08 09:48:11 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 08 09:48:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:11.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 08 09:48:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:11 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 08 09:48:11 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 08 09:48:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f80012a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 08 09:48:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 08 09:48:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 08 09:48:11 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 84 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:11 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 84 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:48:11 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 84 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:11 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 84 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:48:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:48:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v29: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Oct 08 09:48:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 08 09:48:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:48:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:11 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Oct 08 09:48:11 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Oct 08 09:48:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 08 09:48:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:48:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 08 09:48:11 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:48:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:11 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:11 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Oct 08 09:48:11 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Oct 08 09:48:12 compute-0 ceph-mon[73572]: 10.1e scrub starts
Oct 08 09:48:12 compute-0 ceph-mon[73572]: 10.1e scrub ok
Oct 08 09:48:12 compute-0 ceph-mon[73572]: Reconfiguring osd.0 (monmap changed)...
Oct 08 09:48:12 compute-0 ceph-mon[73572]: Reconfiguring daemon osd.0 on compute-1
Oct 08 09:48:12 compute-0 ceph-mon[73572]: 9.4 scrub starts
Oct 08 09:48:12 compute-0 ceph-mon[73572]: 9.4 scrub ok
Oct 08 09:48:12 compute-0 ceph-mon[73572]: osdmap e84: 3 total, 3 up, 3 in
Oct 08 09:48:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 08 09:48:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:48:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:48:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:48:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:48:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:12 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct 08 09:48:12 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct 08 09:48:12 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct 08 09:48:12 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct 08 09:48:12 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 08 09:48:12 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 08 09:48:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 08 09:48:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 08 09:48:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 08 09:48:12 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 08 09:48:12 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 85 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] async=[0] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:48:12 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 85 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] async=[0] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:48:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:12 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:12.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:13.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:13 compute-0 ceph-mon[73572]: 10.11 scrub starts
Oct 08 09:48:13 compute-0 ceph-mon[73572]: 10.11 scrub ok
Oct 08 09:48:13 compute-0 ceph-mon[73572]: pgmap v29: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:13 compute-0 ceph-mon[73572]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 08 09:48:13 compute-0 ceph-mon[73572]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 08 09:48:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:13 compute-0 ceph-mon[73572]: 9.1c scrub starts
Oct 08 09:48:13 compute-0 ceph-mon[73572]: 9.1c scrub ok
Oct 08 09:48:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 08 09:48:13 compute-0 ceph-mon[73572]: osdmap e85: 3 total, 3 up, 3 in
Oct 08 09:48:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:48:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 08 09:48:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 08 09:48:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 08 09:48:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:48:13 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 86 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=5 ec=54/38 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.144413948s) [0] async=[0] r=-1 lpr=86 pi=[54,86)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 236.623565674s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:13 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 86 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=5 ec=54/38 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.144309998s) [0] r=-1 lpr=86 pi=[54,86)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.623565674s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:13 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 86 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=9 ec=54/38 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.144071579s) [0] async=[0] r=-1 lpr=86 pi=[54,86)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 236.623596191s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:13 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 86 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=9 ec=54/38 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.143853188s) [0] r=-1 lpr=86 pi=[54,86)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.623596191s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:48:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:13 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Oct 08 09:48:13 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Oct 08 09:48:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 08 09:48:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:48:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 08 09:48:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:48:13 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 08 09:48:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:13 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Oct 08 09:48:13 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Oct 08 09:48:13 compute-0 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 08 09:48:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v32: 353 pgs: 2 peering, 351 active+clean; 455 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 08 09:48:14 compute-0 sshd-session[103771]: Accepted publickey for zuul from 192.168.122.30 port 41332 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:48:14 compute-0 systemd-logind[798]: New session 40 of user zuul.
Oct 08 09:48:14 compute-0 systemd[1]: Started Session 40 of User zuul.
Oct 08 09:48:14 compute-0 sshd-session[103771]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:48:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:48:14 compute-0 ceph-mon[73572]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct 08 09:48:14 compute-0 ceph-mon[73572]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct 08 09:48:14 compute-0 ceph-mon[73572]: 11.17 deep-scrub starts
Oct 08 09:48:14 compute-0 ceph-mon[73572]: 11.17 deep-scrub ok
Oct 08 09:48:14 compute-0 ceph-mon[73572]: osdmap e86: 3 total, 3 up, 3 in
Oct 08 09:48:14 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:14 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:14 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 08 09:48:14 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 08 09:48:14 compute-0 ceph-mon[73572]: 9.12 scrub starts
Oct 08 09:48:14 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:14 compute-0 ceph-mon[73572]: 9.12 scrub ok
Oct 08 09:48:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:48:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:14 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.mtagwx (monmap changed)...
Oct 08 09:48:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.mtagwx (monmap changed)...
Oct 08 09:48:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 08 09:48:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 08 09:48:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 08 09:48:14 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 09:48:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:14 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:14 compute-0 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.mtagwx on compute-2
Oct 08 09:48:14 compute-0 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.mtagwx on compute-2
Oct 08 09:48:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 08 09:48:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 08 09:48:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 08 09:48:14 compute-0 python3.9[103925]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 08 09:48:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:14 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8002180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:14.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:15.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:15 compute-0 ceph-mon[73572]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 08 09:48:15 compute-0 ceph-mon[73572]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 08 09:48:15 compute-0 ceph-mon[73572]: 10.4 scrub starts
Oct 08 09:48:15 compute-0 ceph-mon[73572]: 10.4 scrub ok
Oct 08 09:48:15 compute-0 ceph-mon[73572]: pgmap v32: 353 pgs: 2 peering, 351 active+clean; 455 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 08 09:48:15 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:15 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:15 compute-0 ceph-mon[73572]: Reconfiguring mgr.compute-2.mtagwx (monmap changed)...
Oct 08 09:48:15 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 08 09:48:15 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 09:48:15 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:15 compute-0 ceph-mon[73572]: Reconfiguring daemon mgr.compute-2.mtagwx on compute-2
Oct 08 09:48:15 compute-0 ceph-mon[73572]: osdmap e87: 3 total, 3 up, 3 in
Oct 08 09:48:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:48:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v34: 353 pgs: 2 peering, 351 active+clean; 455 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Oct 08 09:48:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:48:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:15] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct 08 09:48:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:15] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct 08 09:48:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Oct 08 09:48:15 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 08 09:48:15 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 08 09:48:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Oct 08 09:48:15 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 08 09:48:15 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 08 09:48:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Oct 08 09:48:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 08 09:48:15 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 08 09:48:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct 08 09:48:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:15 compute-0 ceph-mgr[73869]: [prometheus INFO root] Restarting engine...
Oct 08 09:48:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:15] ENGINE Bus STOPPING
Oct 08 09:48:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:15] ENGINE Bus STOPPING
Oct 08 09:48:15 compute-0 sudo[104102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:15 compute-0 sudo[104102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:15 compute-0 sudo[104102]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:16 compute-0 sudo[104127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 09:48:16 compute-0 sudo[104127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:16 compute-0 python3.9[104100]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:48:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct 08 09:48:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE Bus STOPPED
Oct 08 09:48:16 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct 08 09:48:16 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE Bus STOPPED
Oct 08 09:48:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE Bus STARTING
Oct 08 09:48:16 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE Bus STARTING
Oct 08 09:48:16 compute-0 ceph-mon[73572]: 12.17 scrub starts
Oct 08 09:48:16 compute-0 ceph-mon[73572]: 9.1a scrub starts
Oct 08 09:48:16 compute-0 ceph-mon[73572]: 12.17 scrub ok
Oct 08 09:48:16 compute-0 ceph-mon[73572]: 9.1a scrub ok
Oct 08 09:48:16 compute-0 ceph-mon[73572]: 8.d scrub starts
Oct 08 09:48:16 compute-0 ceph-mon[73572]: pgmap v34: 353 pgs: 2 peering, 351 active+clean; 455 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Oct 08 09:48:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 08 09:48:16 compute-0 ceph-mon[73572]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 08 09:48:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 08 09:48:16 compute-0 ceph-mon[73572]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 08 09:48:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 08 09:48:16 compute-0 ceph-mon[73572]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 08 09:48:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE Serving on http://:::9283
Oct 08 09:48:16 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE Serving on http://:::9283
Oct 08 09:48:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE Bus STARTED
Oct 08 09:48:16 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE Bus STARTED
Oct 08 09:48:16 compute-0 ceph-mgr[73869]: [prometheus INFO root] Engine started.
Oct 08 09:48:16 compute-0 podman[104264]: 2025-10-08 09:48:16.636782009 +0000 UTC m=+0.109298220 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Oct 08 09:48:16 compute-0 podman[104264]: 2025-10-08 09:48:16.73048324 +0000 UTC m=+0.202999431 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 08 09:48:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:16 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:16.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:17 compute-0 sudo[104480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntrycwemwgbcrchzfbztswwwbergslt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916896.6406224-93-73999641421458/AnsiballZ_command.py'
Oct 08 09:48:17 compute-0 sudo[104480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:48:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:17.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:17.156Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001769584s
Oct 08 09:48:17 compute-0 python3.9[104486]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:48:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8002180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:17 compute-0 sudo[104480]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:17 compute-0 podman[104531]: 2025-10-08 09:48:17.336770893 +0000 UTC m=+0.101264675 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:17 compute-0 podman[104580]: 2025-10-08 09:48:17.401285653 +0000 UTC m=+0.049878999 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:17 compute-0 ceph-mon[73572]: 8.d scrub ok
Oct 08 09:48:17 compute-0 podman[104531]: 2025-10-08 09:48:17.426883704 +0000 UTC m=+0.191377506 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v35: 353 pgs: 2 peering, 351 active+clean; 455 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 08 09:48:17 compute-0 podman[104629]: 2025-10-08 09:48:17.69423122 +0000 UTC m=+0.075095990 container exec c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:48:17 compute-0 podman[104674]: 2025-10-08 09:48:17.763176573 +0000 UTC m=+0.052777784 container exec_died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 08 09:48:17 compute-0 podman[104629]: 2025-10-08 09:48:17.803698763 +0000 UTC m=+0.184563543 container exec_died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:48:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:48:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:48:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:48:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:48:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:48:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:48:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:48:18 compute-0 podman[104748]: 2025-10-08 09:48:18.188574227 +0000 UTC m=+0.122138756 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:48:18 compute-0 podman[104780]: 2025-10-08 09:48:18.255917039 +0000 UTC m=+0.050407332 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:48:18 compute-0 podman[104748]: 2025-10-08 09:48:18.30240977 +0000 UTC m=+0.235974269 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:48:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:48:18 compute-0 sudo[104859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lldxiqwleiieisudjuqmonqllwgxutki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916897.6911483-129-179729066396769/AnsiballZ_stat.py'
Oct 08 09:48:18 compute-0 sudo[104859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:48:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:48:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:48:18 compute-0 ceph-mon[73572]: 8.1f scrub starts
Oct 08 09:48:18 compute-0 ceph-mon[73572]: 8.1f scrub ok
Oct 08 09:48:18 compute-0 ceph-mon[73572]: pgmap v35: 353 pgs: 2 peering, 351 active+clean; 455 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 08 09:48:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:48:18 compute-0 python3.9[104863]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:48:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:18 compute-0 sudo[104859]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:18 compute-0 podman[104890]: 2025-10-08 09:48:18.673655907 +0000 UTC m=+0.181436593 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, name=keepalived, io.openshift.expose-services=, io.buildah.version=1.28.2)
Oct 08 09:48:18 compute-0 podman[104934]: 2025-10-08 09:48:18.786186038 +0000 UTC m=+0.084986461 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, release=1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 08 09:48:18 compute-0 podman[104890]: 2025-10-08 09:48:18.795699851 +0000 UTC m=+0.303480537 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, distribution-scope=public, version=2.2.4, io.openshift.expose-services=, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 08 09:48:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:18 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 08 09:48:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:18.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 08 09:48:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 08 09:48:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:19.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 08 09:48:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:19 compute-0 podman[105032]: 2025-10-08 09:48:19.296412409 +0000 UTC m=+0.176297332 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:19 compute-0 sudo[105134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idugcryaztnqrzmmxuddvxxvgghawqhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916898.9246168-162-37490388496315/AnsiballZ_file.py'
Oct 08 09:48:19 compute-0 sudo[105134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:48:19 compute-0 podman[105135]: 2025-10-08 09:48:19.445367876 +0000 UTC m=+0.104311403 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:19 compute-0 podman[105032]: 2025-10-08 09:48:19.530023957 +0000 UTC m=+0.409908850 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:19 compute-0 python3.9[105147]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:48:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Oct 08 09:48:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 08 09:48:19 compute-0 sudo[105134]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:19 compute-0 ceph-mon[73572]: 8.f scrub starts
Oct 08 09:48:19 compute-0 ceph-mon[73572]: 8.f scrub ok
Oct 08 09:48:19 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:19 compute-0 ceph-mon[73572]: 8.6 scrub starts
Oct 08 09:48:19 compute-0 ceph-mon[73572]: 8.6 scrub ok
Oct 08 09:48:19 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 08 09:48:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 08 09:48:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 08 09:48:19 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 08 09:48:20 compute-0 podman[105257]: 2025-10-08 09:48:20.103545408 +0000 UTC m=+0.190115384 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:20 compute-0 sudo[105310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:48:20 compute-0 sudo[105310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:20 compute-0 sudo[105310]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:20 compute-0 podman[105257]: 2025-10-08 09:48:20.270351808 +0000 UTC m=+0.356921774 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:48:20 compute-0 python3.9[105386]: ansible-ansible.builtin.service_facts Invoked
Oct 08 09:48:20 compute-0 network[105427]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 09:48:20 compute-0 network[105428]: 'network-scripts' will be removed from distribution in near future.
Oct 08 09:48:20 compute-0 network[105429]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 09:48:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094820 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:48:20 compute-0 ceph-mon[73572]: 9.d deep-scrub starts
Oct 08 09:48:20 compute-0 ceph-mon[73572]: 9.d deep-scrub ok
Oct 08 09:48:20 compute-0 ceph-mon[73572]: pgmap v36: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 08 09:48:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 08 09:48:20 compute-0 ceph-mon[73572]: osdmap e88: 3 total, 3 up, 3 in
Oct 08 09:48:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:20 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:20.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:21.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:21 compute-0 podman[105504]: 2025-10-08 09:48:21.461809415 +0000 UTC m=+0.089898745 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:21 compute-0 podman[105504]: 2025-10-08 09:48:21.501450613 +0000 UTC m=+0.129539943 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:48:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Oct 08 09:48:21 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 08 09:48:21 compute-0 sudo[104127]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:48:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 08 09:48:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 08 09:48:22 compute-0 ceph-mon[73572]: 9.3 scrub starts
Oct 08 09:48:22 compute-0 ceph-mon[73572]: 9.3 scrub ok
Oct 08 09:48:22 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:48:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:48:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:48:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:48:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:22 compute-0 sudo[105618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:22 compute-0 sudo[105618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:22 compute-0 sudo[105618]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:22 compute-0 sudo[105647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:48:22 compute-0 sudo[105647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:22 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:23.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:23.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:23 compute-0 podman[105732]: 2025-10-08 09:48:23.126438522 +0000 UTC m=+0.059135444 container create 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 09:48:23 compute-0 systemd[92032]: Starting Mark boot as successful...
Oct 08 09:48:23 compute-0 systemd[92032]: Finished Mark boot as successful.
Oct 08 09:48:23 compute-0 podman[105732]: 2025-10-08 09:48:23.089908524 +0000 UTC m=+0.022605446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:48:23 compute-0 systemd[1]: Started libpod-conmon-2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929.scope.
Oct 08 09:48:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:23 compute-0 podman[105732]: 2025-10-08 09:48:23.267294033 +0000 UTC m=+0.199990955 container init 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:48:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:23 compute-0 podman[105732]: 2025-10-08 09:48:23.27388798 +0000 UTC m=+0.206584872 container start 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 09:48:23 compute-0 vibrant_wright[105774]: 167 167
Oct 08 09:48:23 compute-0 systemd[1]: libpod-2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929.scope: Deactivated successfully.
Oct 08 09:48:23 compute-0 podman[105732]: 2025-10-08 09:48:23.323347028 +0000 UTC m=+0.256043950 container attach 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 08 09:48:23 compute-0 podman[105732]: 2025-10-08 09:48:23.324135248 +0000 UTC m=+0.256832150 container died 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 09:48:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 08 09:48:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1490fd9de60f279ae6053b646ec82cc486bc94dc5bb0005afe0e4c3a80771161-merged.mount: Deactivated successfully.
Oct 08 09:48:23 compute-0 ceph-mon[73572]: 9.b scrub starts
Oct 08 09:48:23 compute-0 ceph-mon[73572]: 9.b scrub ok
Oct 08 09:48:23 compute-0 ceph-mon[73572]: pgmap v38: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 08 09:48:23 compute-0 ceph-mon[73572]: osdmap e89: 3 total, 3 up, 3 in
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:48:23 compute-0 ceph-mon[73572]: 9.7 scrub starts
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:48:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:48:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:23 compute-0 podman[105732]: 2025-10-08 09:48:23.806347946 +0000 UTC m=+0.739044848 container remove 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:48:23 compute-0 systemd[1]: libpod-conmon-2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929.scope: Deactivated successfully.
Oct 08 09:48:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 08 09:48:24 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 08 09:48:24 compute-0 podman[105858]: 2025-10-08 09:48:24.065413982 +0000 UTC m=+0.114042760 container create 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 08 09:48:24 compute-0 podman[105858]: 2025-10-08 09:48:23.972822579 +0000 UTC m=+0.021451407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:48:24 compute-0 systemd[1]: Started libpod-conmon-1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e.scope.
Oct 08 09:48:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:24 compute-0 podman[105858]: 2025-10-08 09:48:24.259434264 +0000 UTC m=+0.308063062 container init 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:48:24 compute-0 podman[105858]: 2025-10-08 09:48:24.268912765 +0000 UTC m=+0.317541563 container start 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:24 compute-0 podman[105858]: 2025-10-08 09:48:24.302853228 +0000 UTC m=+0.351482006 container attach 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:48:24 compute-0 python3.9[105938]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:48:24 compute-0 sharp_galileo[105941]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:48:24 compute-0 sharp_galileo[105941]: --> All data devices are unavailable
Oct 08 09:48:24 compute-0 systemd[1]: libpod-1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e.scope: Deactivated successfully.
Oct 08 09:48:24 compute-0 podman[105858]: 2025-10-08 09:48:24.582963609 +0000 UTC m=+0.631592427 container died 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:48:24 compute-0 ceph-mon[73572]: 9.7 scrub ok
Oct 08 09:48:24 compute-0 ceph-mon[73572]: 9.13 deep-scrub starts
Oct 08 09:48:24 compute-0 ceph-mon[73572]: 9.13 deep-scrub ok
Oct 08 09:48:24 compute-0 ceph-mon[73572]: pgmap v40: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:24 compute-0 ceph-mon[73572]: osdmap e90: 3 total, 3 up, 3 in
Oct 08 09:48:24 compute-0 ceph-mon[73572]: 9.18 scrub starts
Oct 08 09:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831-merged.mount: Deactivated successfully.
Oct 08 09:48:24 compute-0 podman[105858]: 2025-10-08 09:48:24.897545415 +0000 UTC m=+0.946174193 container remove 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 08 09:48:24 compute-0 systemd[1]: libpod-conmon-1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e.scope: Deactivated successfully.
Oct 08 09:48:24 compute-0 sudo[105647]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:24 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:24 compute-0 sudo[106118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:24 compute-0 sudo[106118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:24 compute-0 sudo[106118]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 08 09:48:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:25.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:25 compute-0 sudo[106144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:48:25 compute-0 sudo[106144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:25.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 08 09:48:25 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 08 09:48:25 compute-0 python3.9[106117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:48:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:25 compute-0 podman[106216]: 2025-10-08 09:48:25.486107938 +0000 UTC m=+0.099224124 container create 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 08 09:48:25 compute-0 podman[106216]: 2025-10-08 09:48:25.412361062 +0000 UTC m=+0.025477278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:48:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:25 compute-0 systemd[1]: Started libpod-conmon-61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a.scope.
Oct 08 09:48:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:25 compute-0 podman[106216]: 2025-10-08 09:48:25.642128894 +0000 UTC m=+0.255245100 container init 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:48:25 compute-0 podman[106216]: 2025-10-08 09:48:25.649257785 +0000 UTC m=+0.262374011 container start 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:48:25 compute-0 inspiring_tesla[106257]: 167 167
Oct 08 09:48:25 compute-0 systemd[1]: libpod-61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a.scope: Deactivated successfully.
Oct 08 09:48:25 compute-0 podman[106216]: 2025-10-08 09:48:25.695958283 +0000 UTC m=+0.309074469 container attach 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:48:25 compute-0 podman[106216]: 2025-10-08 09:48:25.696879616 +0000 UTC m=+0.309995792 container died 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c0434133753b410a5505efd5e966f544aaa12c6418008eb2c977ac04e8947f7-merged.mount: Deactivated successfully.
Oct 08 09:48:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:25] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct 08 09:48:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:25] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct 08 09:48:25 compute-0 podman[106216]: 2025-10-08 09:48:25.899657011 +0000 UTC m=+0.512773197 container remove 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 09:48:25 compute-0 systemd[1]: libpod-conmon-61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a.scope: Deactivated successfully.
Oct 08 09:48:26 compute-0 podman[106296]: 2025-10-08 09:48:26.094739051 +0000 UTC m=+0.058826907 container create 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 08 09:48:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 08 09:48:26 compute-0 ceph-mon[73572]: 9.18 scrub ok
Oct 08 09:48:26 compute-0 ceph-mon[73572]: osdmap e91: 3 total, 3 up, 3 in
Oct 08 09:48:26 compute-0 ceph-mon[73572]: 9.1f scrub starts
Oct 08 09:48:26 compute-0 systemd[1]: Started libpod-conmon-4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4.scope.
Oct 08 09:48:26 compute-0 podman[106296]: 2025-10-08 09:48:26.056692473 +0000 UTC m=+0.020780349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:48:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 08 09:48:26 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:26 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 08 09:48:26 compute-0 podman[106296]: 2025-10-08 09:48:26.250052169 +0000 UTC m=+0.214140025 container init 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 08 09:48:26 compute-0 podman[106296]: 2025-10-08 09:48:26.262253639 +0000 UTC m=+0.226341485 container start 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 08 09:48:26 compute-0 podman[106296]: 2025-10-08 09:48:26.316165449 +0000 UTC m=+0.280253385 container attach 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:48:26 compute-0 determined_albattani[106323]: {
Oct 08 09:48:26 compute-0 determined_albattani[106323]:     "1": [
Oct 08 09:48:26 compute-0 determined_albattani[106323]:         {
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "devices": [
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "/dev/loop3"
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             ],
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "lv_name": "ceph_lv0",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "lv_size": "21470642176",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "name": "ceph_lv0",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "tags": {
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.cluster_name": "ceph",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.crush_device_class": "",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.encrypted": "0",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.osd_id": "1",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.type": "block",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.vdo": "0",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:                 "ceph.with_tpm": "0"
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             },
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "type": "block",
Oct 08 09:48:26 compute-0 determined_albattani[106323]:             "vg_name": "ceph_vg0"
Oct 08 09:48:26 compute-0 determined_albattani[106323]:         }
Oct 08 09:48:26 compute-0 determined_albattani[106323]:     ]
Oct 08 09:48:26 compute-0 determined_albattani[106323]: }
Oct 08 09:48:26 compute-0 systemd[1]: libpod-4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4.scope: Deactivated successfully.
Oct 08 09:48:26 compute-0 podman[106296]: 2025-10-08 09:48:26.583352971 +0000 UTC m=+0.547440847 container died 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70-merged.mount: Deactivated successfully.
Oct 08 09:48:26 compute-0 python3.9[106434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:48:26 compute-0 podman[106296]: 2025-10-08 09:48:26.886384995 +0000 UTC m=+0.850472841 container remove 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 08 09:48:26 compute-0 systemd[1]: libpod-conmon-4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4.scope: Deactivated successfully.
Oct 08 09:48:26 compute-0 sudo[106144]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:26 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:26 compute-0 sudo[106455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:48:27 compute-0 sudo[106455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:27 compute-0 sudo[106455]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:27.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:27 compute-0 sudo[106481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:48:27 compute-0 sudo[106481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:27.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 08 09:48:27 compute-0 ceph-mon[73572]: 9.1f scrub ok
Oct 08 09:48:27 compute-0 ceph-mon[73572]: pgmap v43: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:27 compute-0 ceph-mon[73572]: osdmap e92: 3 total, 3 up, 3 in
Oct 08 09:48:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 08 09:48:27 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 08 09:48:27 compute-0 podman[106602]: 2025-10-08 09:48:27.495091148 +0000 UTC m=+0.091646520 container create f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 09:48:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:27 compute-0 podman[106602]: 2025-10-08 09:48:27.424487634 +0000 UTC m=+0.021043036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:48:27 compute-0 systemd[1]: Started libpod-conmon-f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1.scope.
Oct 08 09:48:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:27 compute-0 podman[106602]: 2025-10-08 09:48:27.629732921 +0000 UTC m=+0.226288313 container init f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 08 09:48:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:27 compute-0 podman[106602]: 2025-10-08 09:48:27.636272648 +0000 UTC m=+0.232828020 container start f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:27 compute-0 vibrant_bhabha[106687]: 167 167
Oct 08 09:48:27 compute-0 systemd[1]: libpod-f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1.scope: Deactivated successfully.
Oct 08 09:48:27 compute-0 podman[106602]: 2025-10-08 09:48:27.684015861 +0000 UTC m=+0.280571273 container attach f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:27 compute-0 podman[106602]: 2025-10-08 09:48:27.684821022 +0000 UTC m=+0.281376404 container died f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:27 compute-0 sudo[106729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftfsrtoddytfwruiycywuhmnuegwwkra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916907.3751755-306-256664262050775/AnsiballZ_setup.py'
Oct 08 09:48:27 compute-0 sudo[106729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-06dd95b1d7eaf74a835513406bd038f36123f2eb5446fad567019a555a2b109a-merged.mount: Deactivated successfully.
Oct 08 09:48:27 compute-0 podman[106602]: 2025-10-08 09:48:27.964708407 +0000 UTC m=+0.561263819 container remove f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:48:28 compute-0 python3.9[106731]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:48:28 compute-0 systemd[1]: libpod-conmon-f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1.scope: Deactivated successfully.
Oct 08 09:48:28 compute-0 podman[106746]: 2025-10-08 09:48:28.133068076 +0000 UTC m=+0.072341420 container create 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:48:28 compute-0 podman[106746]: 2025-10-08 09:48:28.080615523 +0000 UTC m=+0.019888887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:48:28 compute-0 systemd[1]: Started libpod-conmon-3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2.scope.
Oct 08 09:48:28 compute-0 sudo[106729]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:48:28 compute-0 podman[106746]: 2025-10-08 09:48:28.337630636 +0000 UTC m=+0.276904050 container init 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:48:28 compute-0 podman[106746]: 2025-10-08 09:48:28.34446513 +0000 UTC m=+0.283738514 container start 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:48:28 compute-0 podman[106746]: 2025-10-08 09:48:28.367336012 +0000 UTC m=+0.306609376 container attach 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:48:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:48:28 compute-0 ceph-mon[73572]: 9.1b scrub starts
Oct 08 09:48:28 compute-0 ceph-mon[73572]: 9.1b scrub ok
Oct 08 09:48:28 compute-0 ceph-mon[73572]: osdmap e93: 3 total, 3 up, 3 in
Oct 08 09:48:28 compute-0 ceph-mon[73572]: 9.1d scrub starts
Oct 08 09:48:28 compute-0 ceph-mon[73572]: 9.1d scrub ok
Oct 08 09:48:28 compute-0 ceph-mon[73572]: pgmap v46: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:28 compute-0 sudo[106883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypvrcaeejutmybjbrtoelldbbeuoshvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916907.3751755-306-256664262050775/AnsiballZ_dnf.py'
Oct 08 09:48:28 compute-0 sudo[106883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:48:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:28 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:28 compute-0 python3.9[106889]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:48:28 compute-0 lvm[106914]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:48:28 compute-0 lvm[106914]: VG ceph_vg0 finished
Oct 08 09:48:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:29.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:29 compute-0 flamboyant_jepsen[106763]: {}
Oct 08 09:48:29 compute-0 systemd[1]: libpod-3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2.scope: Deactivated successfully.
Oct 08 09:48:29 compute-0 systemd[1]: libpod-3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2.scope: Consumed 1.017s CPU time.
Oct 08 09:48:29 compute-0 podman[106746]: 2025-10-08 09:48:29.078790438 +0000 UTC m=+1.018063802 container died 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 08 09:48:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:29.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 08 09:48:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e-merged.mount: Deactivated successfully.
Oct 08 09:48:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:29 compute-0 ceph-mon[73572]: 9.8 scrub starts
Oct 08 09:48:29 compute-0 ceph-mon[73572]: 9.8 scrub ok
Oct 08 09:48:29 compute-0 podman[106746]: 2025-10-08 09:48:29.56118332 +0000 UTC m=+1.500456664 container remove 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:48:29 compute-0 sudo[106481]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:48:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Oct 08 09:48:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 08 09:48:29 compute-0 systemd[1]: libpod-conmon-3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2.scope: Deactivated successfully.
Oct 08 09:48:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:48:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:29 compute-0 sudo[106946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:48:29 compute-0 sudo[106946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:48:29 compute-0 sudo[106946]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 08 09:48:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
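The mgr is walking pgp_num_actual on the default.rgw.log pool upward one placement group at a time (steps 15 through 20 are visible in this capture), so only a small slice of data is remapped per step; each accepted step commits a new osdmap, which is why the epoch climbs steadily from e94 toward e112 below. Expressed as a manual loop, a rough sketch of the same operation (in reality the mgr issues these mon commands itself and paces them against its misplaced-PG budget, the "max misplaced 0.050000" the balancer reports later in this capture):

    import subprocess
    import time

    POOL = 'default.rgw.log'
    for val in range(15, 21):          # the steps visible in this capture
        subprocess.check_call(
            ['ceph', 'osd', 'pool', 'set', POOL, 'pgp_num_actual', str(val)]
        )
        time.sleep(2)                  # crude stand-in for the mgr's pacing logic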
Oct 08 09:48:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 08 09:48:30 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 08 09:48:30 compute-0 ceph-mon[73572]: 9.f deep-scrub starts
Oct 08 09:48:30 compute-0 ceph-mon[73572]: 9.f deep-scrub ok
Oct 08 09:48:30 compute-0 ceph-mon[73572]: 9.5 scrub starts
Oct 08 09:48:30 compute-0 ceph-mon[73572]: 9.5 scrub ok
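The mon's cluster log shows PGs in pool 9 being scrubbed and deep-scrubbed back to back, each starting and finishing within the same second, which is expected with only ~457 KiB of data cluster-wide. The same consistency checks can be requested by hand; a sketch assuming an admin keyring:

    import subprocess

    # Ask for the same kinds of checks the cluster just ran on its own:
    subprocess.check_call(['ceph', 'pg', 'scrub', '9.5'])        # metadata-level scrub
    subprocess.check_call(['ceph', 'pg', 'deep-scrub', '9.f'])   # also reads object data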
Oct 08 09:48:30 compute-0 ceph-mon[73572]: pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:30 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 08 09:48:30 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:30 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:48:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:30 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:31.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:31.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Oct 08 09:48:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 08 09:48:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 08 09:48:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 08 09:48:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 08 09:48:31 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 08 09:48:31 compute-0 ceph-mon[73572]: osdmap e94: 3 total, 3 up, 3 in
Oct 08 09:48:31 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 08 09:48:31 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 08 09:48:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 08 09:48:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 08 09:48:32 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 08 09:48:32 compute-0 ceph-mon[73572]: pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 08 09:48:32 compute-0 ceph-mon[73572]: osdmap e95: 3 total, 3 up, 3 in
Oct 08 09:48:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:48:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:48:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:32 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:48:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:32 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:48:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:32 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
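The recurring ganesha.nfsd EVENT lines are the TIRPC transport layer failing to read a PROXY-protocol header on a freshly accepted connection (fd 48, later fd 49) and marking that transport dead; the bare "%" is apparently a formatting defect in the daemon's own message, so the offending length value is not recoverable from this capture. Their steady ~2 s cadence, and the haproxy "Layer4 check passed" line later in this section, are consistent with ingress health probes that connect and disconnect without sending a complete header, rather than real client traffic. A quick way to see the periodicity, assuming this journal has been exported to a plain file:

    import re
    from collections import Counter

    pat = re.compile(
        r'ganesha\.nfsd-2\[(?P<thr>\w+)\] .*svc_vc_recv: \S+ fd (?P<fd>\d+)'
    )
    hits = Counter()
    with open('journal.txt') as fh:    # hypothetical export of this journal
        for entry in fh:
            m = pat.search(entry)
            if m:
                hits[(m['thr'], m['fd'])] += 1
    print(hits.most_common())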
Oct 08 09:48:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:33.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:33.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
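The mon re-reports its memory auto-tuning split roughly every five seconds in this capture; the figures are bytes. For scale:

    # The cache split from the line above, converted to GiB:
    for name, n in [('cache_size',     1020054731),
                    ('inc/full_alloc',  348127232),
                    ('kv_alloc',        322961408)]:
        print(f'{name}: {n / 2**30:.3f} GiB')
    # cache_size: 0.950 GiB, inc/full_alloc: 0.324 GiB, kv_alloc: 0.301 GiB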
Oct 08 09:48:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 08 09:48:33 compute-0 ceph-mon[73572]: osdmap e96: 3 total, 3 up, 3 in
Oct 08 09:48:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:48:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 08 09:48:33 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 08 09:48:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 08 09:48:34 compute-0 ceph-mon[73572]: pgmap v52: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:34 compute-0 ceph-mon[73572]: osdmap e97: 3 total, 3 up, 3 in
Oct 08 09:48:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 08 09:48:34 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 08 09:48:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:34 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:35.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:35.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:35] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 08 09:48:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:35] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 08 09:48:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
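This closes the NFS grace cycle opened above: the server entered grace at 09:48:29 with a 90-second budget, the reaper reloaded client recovery state from the backend at 09:48:32, found reclaim complete with a client count of zero, and therefore lifted grace early:

    from datetime import datetime

    start = datetime.fromisoformat('2025-10-08T09:48:29')   # NFS Server Now IN GRACE
    lifted = datetime.fromisoformat('2025-10-08T09:48:35')  # NFS Server Now NOT IN GRACE
    print((lifted - start).total_seconds())                 # 6.0 s of a 90 s window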
Oct 08 09:48:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 08 09:48:35 compute-0 ceph-mon[73572]: osdmap e98: 3 total, 3 up, 3 in
Oct 08 09:48:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 08 09:48:35 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 08 09:48:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 08 09:48:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:36 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:36 compute-0 ceph-mon[73572]: pgmap v55: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:36 compute-0 ceph-mon[73572]: osdmap e99: 3 total, 3 up, 3 in
Oct 08 09:48:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 08 09:48:37 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 08 09:48:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:37.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:37.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:38 compute-0 ceph-mon[73572]: osdmap e100: 3 total, 3 up, 3 in
Oct 08 09:48:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:48:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:38 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:39.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:39 compute-0 ceph-mon[73572]: pgmap v58: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:48:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:39.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 348 B/s rd, 174 B/s wr, 0 op/s; 37 B/s, 2 objects/s recovering
Oct 08 09:48:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Oct 08 09:48:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 08 09:48:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 08 09:48:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 08 09:48:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 08 09:48:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 08 09:48:40 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 08 09:48:40 compute-0 sudo[107041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:48:40 compute-0 sudo[107041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:48:40 compute-0 sudo[107041]: pam_unix(sudo:session): session closed for user root
Oct 08 09:48:40 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 101 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=101 pruub=13.476019859s) [0] r=-1 lpr=101 pi=[54,101)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 262.316497803s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:40 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 101 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=101 pruub=13.475879669s) [0] r=-1 lpr=101 pi=[54,101)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.316497803s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:40 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:41.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:41 compute-0 ceph-mon[73572]: pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 348 B/s rd, 174 B/s wr, 0 op/s; 37 B/s, 2 objects/s recovering
Oct 08 09:48:41 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 08 09:48:41 compute-0 ceph-mon[73572]: osdmap e101: 3 total, 3 up, 3 in
Oct 08 09:48:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:41.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 08 09:48:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 08 09:48:41 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 08 09:48:41 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 102 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=102) [0]/[1] r=0 lpr=102 pi=[54,102)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:41 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 102 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=102) [0]/[1] r=0 lpr=102 pi=[54,102)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
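Each pgp_num_actual step remaps a few PGs, and osd.1 narrates the resulting peering for pg 9.10: in epoch 101 the up and acting sets move [1] -> [0] and osd.1's role drops to -1, so it transitions to Stray; in epoch 102 the map splits into up=[0], acting=[1], a pg_temp arrangement that keeps osd.1 serving as primary while osd.0 is populated asynchronously (hence the async=[0] and "active+remapped" lines that follow). The same view is available on demand; a sketch assuming an admin keyring on this host:

    import json
    import subprocess

    # 'up' and 'acting' in the query output mirror the [0]/[1] notation in the
    # OSD peering lines above.
    info = json.loads(subprocess.check_output(['ceph', 'pg', '9.10', 'query']))
    print(info['state'], info['up'], info['acting'])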
Oct 08 09:48:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 358 B/s rd, 179 B/s wr, 0 op/s; 38 B/s, 2 objects/s recovering
Oct 08 09:48:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Oct 08 09:48:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 08 09:48:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 08 09:48:42 compute-0 ceph-mon[73572]: osdmap e102: 3 total, 3 up, 3 in
Oct 08 09:48:42 compute-0 ceph-mon[73572]: pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 358 B/s rd, 179 B/s wr, 0 op/s; 38 B/s, 2 objects/s recovering
Oct 08 09:48:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 08 09:48:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 08 09:48:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 08 09:48:42 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 08 09:48:42 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 103 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=103 pruub=12.052393913s) [0] r=-1 lpr=103 pi=[54,103)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 262.316528320s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:42 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 103 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=103 pruub=12.052357674s) [0] r=-1 lpr=103 pi=[54,103)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.316528320s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:42 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 103 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=102/103 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=102) [0]/[1] async=[0] r=0 lpr=102 pi=[54,102)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:48:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094842 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:48:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:42 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:43.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:43.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 08 09:48:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 08 09:48:43 compute-0 ceph-mon[73572]: osdmap e103: 3 total, 3 up, 3 in
Oct 08 09:48:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 08 09:48:43 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 08 09:48:43 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 104 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=102/103 n=2 ec=54/38 lis/c=102/54 les/c/f=103/56/0 sis=104 pruub=14.987093925s) [0] async=[0] r=-1 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 266.272033691s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:43 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 104 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=102/103 n=2 ec=54/38 lis/c=102/54 les/c/f=103/56/0 sis=104 pruub=14.987010956s) [0] r=-1 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 266.272033691s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:43 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 104 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=104) [0]/[1] r=0 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:43 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 104 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=104) [0]/[1] r=0 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:48:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:48:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 08 09:48:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 08 09:48:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 08 09:48:44 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 08 09:48:44 compute-0 ceph-mon[73572]: osdmap e104: 3 total, 3 up, 3 in
Oct 08 09:48:44 compute-0 ceph-mon[73572]: pgmap v65: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 08 09:48:44 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 105 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=104/105 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=104) [0]/[1] async=[0] r=0 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:48:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:45.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:45.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 08 09:48:45 compute-0 ceph-mon[73572]: osdmap e105: 3 total, 3 up, 3 in
Oct 08 09:48:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 08 09:48:45 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 08 09:48:45 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 106 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=104/105 n=5 ec=54/38 lis/c=104/54 les/c/f=105/56/0 sis=106 pruub=14.974084854s) [0] async=[0] r=-1 lpr=106 pi=[54,106)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 268.305603027s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:45 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 106 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=104/105 n=5 ec=54/38 lis/c=104/54 les/c/f=105/56/0 sis=106 pruub=14.974020958s) [0] r=-1 lpr=106 pi=[54,106)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 268.305603027s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 08 09:48:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:45] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 08 09:48:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:45] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
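Each Prometheus scrape is logged twice, once through the container unit and once by the ceph-mgr process, but both record the same cherrypy access entry; the scrapes land every 10 seconds (09:48:35 and 09:48:45) from Prometheus/2.51.0 and return a 48268-byte exposition. The endpoint can be probed directly; a sketch assuming the prometheus module's default port 9283, which is not shown in this log:

    import urllib.request

    with urllib.request.urlopen('http://192.168.122.100:9283/metrics',
                                timeout=5) as resp:
        body = resp.read()
    print(resp.status, len(body))   # expect 200 and a payload around 48 KB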
Oct 08 09:48:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 08 09:48:46 compute-0 ceph-mon[73572]: osdmap e106: 3 total, 3 up, 3 in
Oct 08 09:48:46 compute-0 ceph-mon[73572]: pgmap v68: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 08 09:48:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 08 09:48:46 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 08 09:48:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:46 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:47.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:47.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:47 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:47 compute-0 ceph-mon[73572]: osdmap e107: 3 total, 3 up, 3 in
Oct 08 09:48:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:47 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:48:47
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Some PGs (0.005666) are inactive; try again later
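The balancer's "inactive" fraction is simply the share of PGs not yet active: the pgmap just above reports 1 remapped+peering plus 1 peering out of 353 PGs, and the module declines to build an optimization plan until everything is active again:

    # 2 inactive PGs out of 353, as reported in the pgmap above:
    print(2 / 353)   # 0.0056657..., logged (rounded) as 0.005666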
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 230 B/s rd, 0 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
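The pg_autoscaler lines all follow one formula. The recurring 64411926528 is the root's capacity in bytes (about 60 GiB, matching the pgmap lines), and with 3 OSDs and the default mon_target_pg_per_osd of 100 (an assumption; the option's value is not shown in this log) the subtree's PG budget is 300. Each pool's raw target is then its capacity ratio times its bias times that budget, quantized afterward; the tiny ratios here are why every pool keeps its current count except cephfs.cephfs.meta, whose bias-4 target still quantizes down to 16:

    # Reproducing three of the reported pg targets from the lines above.
    BUDGET = 3 * 100   # 3 OSDs * mon_target_pg_per_osd (assumed default of 100)
    pools = [
        ('.mgr',               7.185749983720779e-06, 1.0),
        ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0),
        ('default.rgw.log',    2.1620840658982875e-06, 1.0),
    ]
    for name, ratio, bias in pools:
        print(name, ratio * bias * BUDGET)
    # .mgr               0.0021557249951162337  (matches the log)
    # cephfs.cephfs.meta 0.0006104707950771635  (matches)
    # default.rgw.log    0.0006486252197694863  (matches)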
Oct 08 09:48:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:48:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:48:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:48:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:48:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:48:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:48:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:48:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
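Alongside the autoscaler pass, the mgr's housekeeping threads check in: the volumes module scans for idle CephFS client connections and finds none to clean up, while rbd_support reloads its mirror-snapshot and trash-purge schedules for each RBD pool; the empty start_after= markers suggest no schedules are configured on this cluster. That reading can be confirmed from the CLI; a sketch (both listings should come back empty here):

    import subprocess

    # List mirror-snapshot and trash-purge schedules at every level.
    subprocess.run(['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '--recursive'])
    subprocess.run(['rbd', 'trash', 'purge', 'schedule', 'ls', '--recursive'])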
Oct 08 09:48:48 compute-0 ceph-mon[73572]: pgmap v70: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 230 B/s rd, 0 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 08 09:48:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:48:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:48:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:48 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:49.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:49.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:48:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Oct 08 09:48:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 08 09:48:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 08 09:48:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 08 09:48:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 08 09:48:49 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 08 09:48:49 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 08 09:48:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 108 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=108 pruub=11.404694557s) [0] r=-1 lpr=108 pi=[54,108)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 270.319274902s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:50 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 108 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=108 pruub=11.404635429s) [0] r=-1 lpr=108 pi=[54,108)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.319274902s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 08 09:48:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:50 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:51 compute-0 ceph-mon[73572]: pgmap v71: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:48:51 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 08 09:48:51 compute-0 ceph-mon[73572]: osdmap e108: 3 total, 3 up, 3 in
Oct 08 09:48:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:51.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 08 09:48:51 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 08 09:48:51 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 109 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=109) [0]/[1] r=0 lpr=109 pi=[54,109)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:51 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 109 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=109) [0]/[1] r=0 lpr=109 pi=[54,109)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 08 09:48:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:51.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:48:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Oct 08 09:48:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 08 09:48:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 08 09:48:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 08 09:48:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 08 09:48:52 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 08 09:48:52 compute-0 ceph-mon[73572]: osdmap e109: 3 total, 3 up, 3 in
Oct 08 09:48:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 08 09:48:52 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 110 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=109/110 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=109) [0]/[1] async=[0] r=0 lpr=109 pi=[54,109)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:48:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:52 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:53.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 08 09:48:53 compute-0 ceph-mon[73572]: pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:48:53 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 08 09:48:53 compute-0 ceph-mon[73572]: osdmap e110: 3 total, 3 up, 3 in
Oct 08 09:48:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:53.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 08 09:48:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 08 09:48:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 111 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=109/110 n=4 ec=54/38 lis/c=109/54 les/c/f=110/56/0 sis=111 pruub=15.415892601s) [0] async=[0] r=-1 lpr=111 pi=[54,111)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 276.629608154s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:48:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 111 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=109/110 n=4 ec=54/38 lis/c=109/54 les/c/f=110/56/0 sis=111 pruub=15.415235519s) [0] r=-1 lpr=111 pi=[54,111)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 276.629608154s@ mbc={}] state<Start>: transitioning to Stray
Oct 08 09:48:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:48:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct 08 09:48:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 08 09:48:54 compute-0 ceph-mon[73572]: osdmap e111: 3 total, 3 up, 3 in
Oct 08 09:48:54 compute-0 ceph-mon[73572]: pgmap v77: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct 08 09:48:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 08 09:48:54 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 08 09:48:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:54 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:55.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:55.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:55 compute-0 ceph-mon[73572]: osdmap e112: 3 total, 3 up, 3 in
Oct 08 09:48:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 224 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Oct 08 09:48:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:55] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 08 09:48:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:55] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 08 09:48:56 compute-0 ceph-mon[73572]: pgmap v79: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 224 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Oct 08 09:48:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:56 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:57.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:57.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000032e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:48:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:48:58 compute-0 ceph-mon[73572]: pgmap v80: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:48:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:58 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:48:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:59.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:48:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:48:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:48:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:59.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:48:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000032e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:48:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 406 B/s rd, 0 op/s; 14 B/s, 0 objects/s recovering
Oct 08 09:48:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Oct 08 09:48:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 08 09:48:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 08 09:48:59 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 08 09:48:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 08 09:48:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 08 09:48:59 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 08 09:49:00 compute-0 sudo[107164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:49:00 compute-0 sudo[107164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:00 compute-0 sudo[107164]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:00 compute-0 ceph-mon[73572]: pgmap v81: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 406 B/s rd, 0 op/s; 14 B/s, 0 objects/s recovering
Oct 08 09:49:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 08 09:49:00 compute-0 ceph-mon[73572]: osdmap e113: 3 total, 3 up, 3 in
Oct 08 09:49:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:00 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:01.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:01.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:49:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Oct 08 09:49:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 08 09:49:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 08 09:49:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 08 09:49:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 08 09:49:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 08 09:49:01 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 08 09:49:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 08 09:49:02 compute-0 ceph-mon[73572]: pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:49:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 08 09:49:02 compute-0 ceph-mon[73572]: osdmap e114: 3 total, 3 up, 3 in
Oct 08 09:49:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:49:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:49:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 08 09:49:02 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 08 09:49:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:02 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000032e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:03.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:03.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:49:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 09:49:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:49:03 compute-0 ceph-mon[73572]: osdmap e115: 3 total, 3 up, 3 in
Oct 08 09:49:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 08 09:49:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 08 09:49:03 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 08 09:49:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 08 09:49:04 compute-0 ceph-mon[73572]: pgmap v86: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 09:49:04 compute-0 ceph-mon[73572]: osdmap e116: 3 total, 3 up, 3 in
Oct 08 09:49:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 08 09:49:04 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 08 09:49:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:04 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:05.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:05.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600004380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:49:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:05] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 08 09:49:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:05] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 08 09:49:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 08 09:49:05 compute-0 ceph-mon[73572]: osdmap e117: 3 total, 3 up, 3 in
Oct 08 09:49:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 08 09:49:06 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 08 09:49:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:07 compute-0 ceph-mon[73572]: pgmap v89: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:49:07 compute-0 ceph-mon[73572]: osdmap e118: 3 total, 3 up, 3 in
Oct 08 09:49:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:07.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:07.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600004380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 212 B/s rd, 0 op/s
Oct 08 09:49:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 08 09:49:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:09 compute-0 ceph-mon[73572]: pgmap v91: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 212 B/s rd, 0 op/s
Oct 08 09:49:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:49:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:09.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:49:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:09.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:49:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Oct 08 09:49:09 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 08 09:49:10 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 08 09:49:10 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 08 09:49:10 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 08 09:49:10 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 08 09:49:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 08 09:49:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600004380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:11.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 08 09:49:11 compute-0 ceph-mon[73572]: pgmap v92: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:49:11 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 08 09:49:11 compute-0 ceph-mon[73572]: osdmap e119: 3 total, 3 up, 3 in
Oct 08 09:49:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 08 09:49:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:11.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:11 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 08 09:49:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:49:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Oct 08 09:49:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 08 09:49:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 08 09:49:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 08 09:49:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 08 09:49:12 compute-0 ceph-mon[73572]: osdmap e120: 3 total, 3 up, 3 in
Oct 08 09:49:12 compute-0 ceph-mon[73572]: pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:49:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 08 09:49:12 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 08 09:49:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:49:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:13.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:49:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:13.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 08 09:49:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 08 09:49:13 compute-0 ceph-mon[73572]: osdmap e121: 3 total, 3 up, 3 in
Oct 08 09:49:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 08 09:49:13 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 08 09:49:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600004380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct 08 09:49:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 08 09:49:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 08 09:49:14 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 08 09:49:14 compute-0 ceph-mon[73572]: osdmap e122: 3 total, 3 up, 3 in
Oct 08 09:49:14 compute-0 ceph-mon[73572]: pgmap v98: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct 08 09:49:14 compute-0 sudo[106883]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:49:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:15.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:49:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:15.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:15 compute-0 ceph-mon[73572]: osdmap e123: 3 total, 3 up, 3 in
Oct 08 09:49:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:15 compute-0 sudo[107356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhnnsvymdomwimslatuvidthomyhvyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916955.0514839-342-243825374213554/AnsiballZ_command.py'
Oct 08 09:49:15 compute-0 sudo[107356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:15 compute-0 python3.9[107358]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:49:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 227 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Oct 08 09:49:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:15] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 08 09:49:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:15] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 08 09:49:16 compute-0 ceph-mon[73572]: pgmap v100: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 227 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Oct 08 09:49:16 compute-0 sudo[107356]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:52740] [POST] [200] [0.118s] [4.0B] [8d746302-ff19-4c72-b43b-3193d3c1e5e8] /api/prometheus_receiver
Oct 08 09:49:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:17.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:17.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:17 compute-0 sudo[107647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbcdjuermfoepwkcnldsbtqcrmttxlag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916956.731725-366-263991960851265/AnsiballZ_selinux.py'
Oct 08 09:49:17 compute-0 sudo[107647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:49:17 compute-0 python3.9[107649]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 08 09:49:17 compute-0 sudo[107647]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:49:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa0e4fbbd00>)]
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa0e4fbb8e0>)]
Oct 08 09:49:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 08 09:49:17 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:49:18 compute-0 sudo[107800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uytwwiolbowvzieqcqlujbqrcxerhwrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916958.1288056-399-250603795936886/AnsiballZ_command.py'
Oct 08 09:49:18 compute-0 sudo[107800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:18 compute-0 python3.9[107802]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 08 09:49:18 compute-0 sudo[107800]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:18 compute-0 ceph-mon[73572]: pgmap v101: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:49:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:19 compute-0 sudo[107953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrlvhrixarposgvyrixvrzukeiyzggcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916958.8266754-423-37347519404488/AnsiballZ_file.py'
Oct 08 09:49:19 compute-0 sudo[107953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:19 compute-0 python3.9[107955]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:49:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:19 compute-0 sudo[107953]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 410 B/s rd, 410 B/s wr, 0 op/s; 14 B/s, 0 objects/s recovering
Oct 08 09:49:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Oct 08 09:49:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 08 09:49:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 08 09:49:19 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 08 09:49:19 compute-0 sudo[108106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqpxuwlyoyjvswgqbgxlkoqbprxganuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916959.554876-447-159545689727765/AnsiballZ_mount.py'
Oct 08 09:49:19 compute-0 sudo[108106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 08 09:49:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 08 09:49:20 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 08 09:49:20 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.ixicfj(active, since 92s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:49:20 compute-0 python3.9[108108]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 08 09:49:20 compute-0 sudo[108106]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:20 compute-0 sudo[108133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:49:20 compute-0 sudo[108133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:20 compute-0 sudo[108133]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:20 compute-0 ceph-mon[73572]: pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 410 B/s rd, 410 B/s wr, 0 op/s; 14 B/s, 0 objects/s recovering
Oct 08 09:49:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 08 09:49:20 compute-0 ceph-mon[73572]: osdmap e124: 3 total, 3 up, 3 in
Oct 08 09:49:20 compute-0 ceph-mon[73572]: mgrmap e32: compute-0.ixicfj(active, since 92s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct 08 09:49:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:21.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:21 compute-0 sudo[108284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxbthouztyvzckrfeormlvopwmbdlnmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916961.0973916-531-206065424552087/AnsiballZ_file.py'
Oct 08 09:49:21 compute-0 sudo[108284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:21 compute-0 python3.9[108286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:49:21 compute-0 sudo[108284]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 383 B/s wr, 0 op/s
Oct 08 09:49:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Oct 08 09:49:21 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 08 09:49:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 08 09:49:22 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 08 09:49:22 compute-0 sudo[108437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcdmksoqbmympsxbttbdmqkwympqopjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916961.7711852-555-276184941845401/AnsiballZ_stat.py'
Oct 08 09:49:22 compute-0 sudo[108437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 08 09:49:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 08 09:49:22 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 08 09:49:22 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 125 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=81/81 les/c/f=82/82/0 sis=125) [1] r=0 lpr=125 pi=[81,125)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:22 compute-0 python3.9[108439]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:49:22 compute-0 sudo[108437]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:22 compute-0 sudo[108515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpwgpjgvygbafsvolmiomfuhmnzozvli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916961.7711852-555-276184941845401/AnsiballZ_file.py'
Oct 08 09:49:22 compute-0 sudo[108515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:22 compute-0 python3.9[108517]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:49:22 compute-0 sudo[108515]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 08 09:49:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 08 09:49:23 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 08 09:49:23 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 126 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=81/81 les/c/f=82/82/0 sis=126) [1]/[2] r=-1 lpr=126 pi=[81,126)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:23 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 126 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=81/81 les/c/f=82/82/0 sis=126) [1]/[2] r=-1 lpr=126 pi=[81,126)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 08 09:49:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:23.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:23 compute-0 ceph-mon[73572]: pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 383 B/s wr, 0 op/s
Oct 08 09:49:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 08 09:49:23 compute-0 ceph-mon[73572]: osdmap e125: 3 total, 3 up, 3 in
Oct 08 09:49:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:23.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 511 B/s wr, 0 op/s
Oct 08 09:49:24 compute-0 sudo[108669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epcrnlekriviputeeijiffzvkfmaljxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916963.6476047-627-45608243488284/AnsiballZ_getent.py'
Oct 08 09:49:24 compute-0 sudo[108669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 08 09:49:24 compute-0 ceph-mon[73572]: osdmap e126: 3 total, 3 up, 3 in
Oct 08 09:49:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 08 09:49:24 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 08 09:49:24 compute-0 python3.9[108671]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 08 09:49:24 compute-0 sudo[108669]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:24 compute-0 sudo[108822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojubyvqdueicorspygtqcmyqaixvhpno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916964.6184225-657-101317837421945/AnsiballZ_getent.py'
Oct 08 09:49:24 compute-0 sudo[108822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:25.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:25 compute-0 ceph-mon[73572]: pgmap v107: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 511 B/s wr, 0 op/s
Oct 08 09:49:25 compute-0 ceph-mon[73572]: osdmap e127: 3 total, 3 up, 3 in
Oct 08 09:49:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 08 09:49:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 08 09:49:25 compute-0 python3.9[108824]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 08 09:49:25 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 08 09:49:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:25 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 128 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=7 ec=54/38 lis/c=126/81 les/c/f=127/82/0 sis=128) [1] r=0 lpr=128 pi=[81,128)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:25 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 128 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=7 ec=54/38 lis/c=126/81 les/c/f=127/82/0 sis=128) [1] r=0 lpr=128 pi=[81,128)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:25 compute-0 sudo[108822]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:49:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:25] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Oct 08 09:49:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:25] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Oct 08 09:49:25 compute-0 sudo[108976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-savplwnxcanvccefurzgapkwqfizkkoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916965.3418245-681-79439256639244/AnsiballZ_group.py'
Oct 08 09:49:25 compute-0 sudo[108976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:26 compute-0 python3.9[108978]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 08 09:49:26 compute-0 sudo[108976]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 08 09:49:26 compute-0 ceph-mon[73572]: osdmap e128: 3 total, 3 up, 3 in
Oct 08 09:49:26 compute-0 ceph-mon[73572]: pgmap v110: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:49:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 08 09:49:26 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 08 09:49:26 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 129 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=128/129 n=7 ec=54/38 lis/c=126/81 les/c/f=127/82/0 sis=128) [1] r=0 lpr=128 pi=[81,128)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:49:26 compute-0 sudo[109129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvgcizadrnwkhevtqwbcieiratdjjpna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916966.4007971-708-277076993542366/AnsiballZ_file.py'
Oct 08 09:49:26 compute-0 sudo[109129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:26 compute-0 python3.9[109131]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 08 09:49:26 compute-0 sudo[109129]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:26.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:49:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:26.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:49:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:27.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:27 compute-0 ceph-mon[73572]: osdmap e129: 3 total, 3 up, 3 in
Oct 08 09:49:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:27 compute-0 sudo[109282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqcarfkbxiynybxdmudilruahohzdjyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916967.251713-741-15847940256966/AnsiballZ_dnf.py'
Oct 08 09:49:27 compute-0 sudo[109282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 224 B/s rd, 0 op/s
Oct 08 09:49:27 compute-0 python3.9[109284]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:49:28 compute-0 ceph-mon[73572]: pgmap v112: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 224 B/s rd, 0 op/s
Oct 08 09:49:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:29 compute-0 sudo[109282]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:29.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:29 compute-0 sudo[109437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfdijjfpgklcneajxsngvintpwirqqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916969.24023-765-183968498727726/AnsiballZ_file.py'
Oct 08 09:49:29 compute-0 sudo[109437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Oct 08 09:49:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Oct 08 09:49:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 08 09:49:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 08 09:49:29 compute-0 python3.9[109439]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:49:29 compute-0 sudo[109437]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 08 09:49:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 08 09:49:29 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 08 09:49:29 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 08 09:49:29 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 130 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=86/86 les/c/f=87/87/0 sis=130) [1] r=0 lpr=130 pi=[86,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:30 compute-0 sudo[109496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:49:30 compute-0 sudo[109496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:30 compute-0 sudo[109496]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:30 compute-0 sudo[109543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 09:49:30 compute-0 sudo[109543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:30 compute-0 sudo[109640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyczaemyzjwuvyskgsswgrxtbrqqbuie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916969.9730542-789-66808046698302/AnsiballZ_stat.py'
Oct 08 09:49:30 compute-0 sudo[109640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:30 compute-0 python3.9[109642]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:49:30 compute-0 sudo[109640]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:30 compute-0 podman[109739]: 2025-10-08 09:49:30.630838013 +0000 UTC m=+0.054208407 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 09:49:30 compute-0 sudo[109810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erltdmbutyuzhxpeymikohgyfqjauxpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916969.9730542-789-66808046698302/AnsiballZ_file.py'
Oct 08 09:49:30 compute-0 sudo[109810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:30 compute-0 podman[109739]: 2025-10-08 09:49:30.727475165 +0000 UTC m=+0.150845529 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:49:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 08 09:49:30 compute-0 ceph-mon[73572]: pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Oct 08 09:49:30 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 08 09:49:30 compute-0 ceph-mon[73572]: osdmap e130: 3 total, 3 up, 3 in
Oct 08 09:49:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 08 09:49:30 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 08 09:49:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 131 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=86/86 les/c/f=87/87/0 sis=131) [1]/[0] r=-1 lpr=131 pi=[86,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:30 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 131 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=86/86 les/c/f=87/87/0 sis=131) [1]/[0] r=-1 lpr=131 pi=[86,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 08 09:49:30 compute-0 python3.9[109812]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:49:30 compute-0 sudo[109810]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:31.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:31 compute-0 podman[109997]: 2025-10-08 09:49:31.222917738 +0000 UTC m=+0.050502294 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:49:31 compute-0 podman[109997]: 2025-10-08 09:49:31.234518305 +0000 UTC m=+0.062102861 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:49:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:31 compute-0 sudo[110147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwxxscowgcldqyucvqjrfgmszgoblpsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916971.1368992-828-112761258944648/AnsiballZ_stat.py'
Oct 08 09:49:31 compute-0 sudo[110147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:31 compute-0 podman[110157]: 2025-10-08 09:49:31.477841345 +0000 UTC m=+0.052381697 container exec c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:49:31 compute-0 podman[110157]: 2025-10-08 09:49:31.491394666 +0000 UTC m=+0.065934988 container exec_died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:49:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:31 compute-0 python3.9[110156]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:49:31 compute-0 sudo[110147]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Oct 08 09:49:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Oct 08 09:49:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 08 09:49:31 compute-0 podman[110224]: 2025-10-08 09:49:31.684507853 +0000 UTC m=+0.048741276 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:49:31 compute-0 podman[110224]: 2025-10-08 09:49:31.741757132 +0000 UTC m=+0.105990535 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:49:31 compute-0 sudo[110349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onbjorbidymncttytodqlokuwlhmmfty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916971.1368992-828-112761258944648/AnsiballZ_file.py'
Oct 08 09:49:31 compute-0 sudo[110349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 08 09:49:31 compute-0 ceph-mon[73572]: osdmap e131: 3 total, 3 up, 3 in
Oct 08 09:49:31 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 08 09:49:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 08 09:49:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 08 09:49:31 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 08 09:49:31 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 132 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=65/65 les/c/f=66/66/0 sis=132) [1] r=0 lpr=132 pi=[65,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:31 compute-0 podman[110367]: 2025-10-08 09:49:31.934871768 +0000 UTC m=+0.051715015 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, name=keepalived, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public)
Oct 08 09:49:31 compute-0 podman[110367]: 2025-10-08 09:49:31.970959931 +0000 UTC m=+0.087803158 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, name=keepalived, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 08 09:49:32 compute-0 python3.9[110353]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:49:32 compute-0 sudo[110349]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094932 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:49:32 compute-0 podman[110456]: 2025-10-08 09:49:32.18727457 +0000 UTC m=+0.052024165 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:49:32 compute-0 podman[110456]: 2025-10-08 09:49:32.211378754 +0000 UTC m=+0.076128239 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:49:32 compute-0 podman[110528]: 2025-10-08 09:49:32.438943359 +0000 UTC m=+0.055804172 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:49:32 compute-0 podman[110528]: 2025-10-08 09:49:32.606502034 +0000 UTC m=+0.223362837 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:49:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:49:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:49:32 compute-0 sudo[110752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coimvkscrwrocitgprxjmqakgsiqsjps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916972.6154618-873-185869940692560/AnsiballZ_dnf.py'
Oct 08 09:49:32 compute-0 sudo[110752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 08 09:49:32 compute-0 ceph-mon[73572]: pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Oct 08 09:49:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 08 09:49:32 compute-0 ceph-mon[73572]: osdmap e132: 3 total, 3 up, 3 in
Oct 08 09:49:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:49:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 08 09:49:32 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 08 09:49:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 133 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=4 ec=54/38 lis/c=131/86 les/c/f=132/87/0 sis=133) [1] r=0 lpr=133 pi=[86,133)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 133 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=65/65 les/c/f=66/66/0 sis=133) [1]/[2] r=-1 lpr=133 pi=[65,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 133 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=65/65 les/c/f=66/66/0 sis=133) [1]/[2] r=-1 lpr=133 pi=[65,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 08 09:49:32 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 133 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=4 ec=54/38 lis/c=131/86 les/c/f=132/87/0 sis=133) [1] r=0 lpr=133 pi=[86,133)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:32 compute-0 podman[110767]: 2025-10-08 09:49:32.949816496 +0000 UTC m=+0.065073090 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:49:32 compute-0 podman[110767]: 2025-10-08 09:49:32.99764658 +0000 UTC m=+0.112903174 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:49:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:33 compute-0 sudo[109543]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:49:33 compute-0 python3.9[110759]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:33.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:33 compute-0 sudo[110814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:49:33 compute-0 sudo[110814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:33 compute-0 sudo[110814]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:33.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:33 compute-0 sudo[110839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:49:33 compute-0 sudo[110839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct 08 09:49:33 compute-0 sudo[110839]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:49:33 compute-0 sudo[110895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:49:33 compute-0 sudo[110895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:33 compute-0 sudo[110895]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 08 09:49:33 compute-0 ceph-mon[73572]: osdmap e133: 3 total, 3 up, 3 in
Oct 08 09:49:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:49:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:49:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:49:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:49:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:49:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 08 09:49:33 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 08 09:49:33 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 134 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=133/134 n=4 ec=54/38 lis/c=131/86 les/c/f=132/87/0 sis=133) [1] r=0 lpr=133 pi=[86,133)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:49:33 compute-0 sudo[110921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:49:33 compute-0 sudo[110921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.109571) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974109671, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2762, "num_deletes": 251, "total_data_size": 6580274, "memory_usage": 6685736, "flush_reason": "Manual Compaction"}
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974150444, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6136445, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8018, "largest_seqno": 10779, "table_properties": {"data_size": 6123232, "index_size": 8555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 31061, "raw_average_key_size": 21, "raw_value_size": 6095197, "raw_average_value_size": 4304, "num_data_blocks": 374, "num_entries": 1416, "num_filter_entries": 1416, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916868, "oldest_key_time": 1759916868, "file_creation_time": 1759916974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 40884 microseconds, and 10022 cpu microseconds.
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.150494) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6136445 bytes OK
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.150514) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.152124) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.152136) EVENT_LOG_v1 {"time_micros": 1759916974152133, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.152152) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 6567733, prev total WAL file size 6567733, number of live WAL files 2.
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.153487) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(5992KB)], [23(11MB)]
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974153534, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18138861, "oldest_snapshot_seqno": -1}
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4032 keys, 14239944 bytes, temperature: kUnknown
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974261555, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14239944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14207795, "index_size": 20967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 103008, "raw_average_key_size": 25, "raw_value_size": 14128784, "raw_average_value_size": 3504, "num_data_blocks": 900, "num_entries": 4032, "num_filter_entries": 4032, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759916974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.261771) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14239944 bytes
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.271691) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.8 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.9, 11.4 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(5.3) write-amplify(2.3) OK, records in: 4564, records dropped: 532 output_compression: NoCompression
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.271737) EVENT_LOG_v1 {"time_micros": 1759916974271720, "job": 8, "event": "compaction_finished", "compaction_time_micros": 108078, "compaction_time_cpu_micros": 28401, "output_level": 6, "num_output_files": 1, "total_output_size": 14239944, "num_input_records": 4564, "num_output_records": 4032, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974272898, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974275131, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.153392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:49:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:49:34 compute-0 sudo[110752]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:34 compute-0 podman[110994]: 2025-10-08 09:49:34.421846609 +0000 UTC m=+0.040460589 container create 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:49:34 compute-0 systemd[1]: Started libpod-conmon-89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8.scope.
Oct 08 09:49:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:49:34 compute-0 podman[110994]: 2025-10-08 09:49:34.487727445 +0000 UTC m=+0.106341435 container init 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 08 09:49:34 compute-0 podman[110994]: 2025-10-08 09:49:34.494039335 +0000 UTC m=+0.112653315 container start 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 08 09:49:34 compute-0 podman[110994]: 2025-10-08 09:49:34.497966746 +0000 UTC m=+0.116580776 container attach 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 08 09:49:34 compute-0 funny_chatterjee[111026]: 167 167
Oct 08 09:49:34 compute-0 systemd[1]: libpod-89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8.scope: Deactivated successfully.
Oct 08 09:49:34 compute-0 podman[110994]: 2025-10-08 09:49:34.499314782 +0000 UTC m=+0.117928762 container died 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 09:49:34 compute-0 podman[110994]: 2025-10-08 09:49:34.407287004 +0000 UTC m=+0.025901014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc1b60c698e92c72ad9c206b5601ff42a9ac402d82892ed03471eb83e6d2b0cd-merged.mount: Deactivated successfully.
Oct 08 09:49:34 compute-0 podman[110994]: 2025-10-08 09:49:34.557054566 +0000 UTC m=+0.175668546 container remove 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 08 09:49:34 compute-0 systemd[1]: libpod-conmon-89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8.scope: Deactivated successfully.
Oct 08 09:49:34 compute-0 podman[111052]: 2025-10-08 09:49:34.722758289 +0000 UTC m=+0.043245823 container create 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:49:34 compute-0 systemd[1]: Started libpod-conmon-49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab.scope.
Oct 08 09:49:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:34 compute-0 podman[111052]: 2025-10-08 09:49:34.789900967 +0000 UTC m=+0.110388601 container init 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:49:34 compute-0 podman[111052]: 2025-10-08 09:49:34.701578413 +0000 UTC m=+0.022065977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:49:34 compute-0 podman[111052]: 2025-10-08 09:49:34.798063699 +0000 UTC m=+0.118551263 container start 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:49:34 compute-0 podman[111052]: 2025-10-08 09:49:34.801845435 +0000 UTC m=+0.122332989 container attach 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 09:49:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 08 09:49:34 compute-0 ceph-mon[73572]: pgmap v119: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct 08 09:49:34 compute-0 ceph-mon[73572]: osdmap e134: 3 total, 3 up, 3 in
Oct 08 09:49:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 08 09:49:34 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 08 09:49:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 135 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=2 ec=54/38 lis/c=133/65 les/c/f=134/66/0 sis=135) [1] r=0 lpr=135 pi=[65,135)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:34 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 135 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=2 ec=54/38 lis/c=133/65 les/c/f=134/66/0 sis=135) [1] r=0 lpr=135 pi=[65,135)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:35 compute-0 beautiful_elion[111068]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:49:35 compute-0 beautiful_elion[111068]: --> All data devices are unavailable
Oct 08 09:49:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:35.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:35 compute-0 systemd[1]: libpod-49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab.scope: Deactivated successfully.
Oct 08 09:49:35 compute-0 podman[111052]: 2025-10-08 09:49:35.143391549 +0000 UTC m=+0.463879143 container died 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 09:49:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:35.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74-merged.mount: Deactivated successfully.
Oct 08 09:49:35 compute-0 python3.9[111205]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:49:35 compute-0 podman[111052]: 2025-10-08 09:49:35.230616756 +0000 UTC m=+0.551104300 container remove 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 09:49:35 compute-0 systemd[1]: libpod-conmon-49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab.scope: Deactivated successfully.
Oct 08 09:49:35 compute-0 sudo[110921]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:35 compute-0 sudo[111225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:49:35 compute-0 sudo[111225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:35 compute-0 sudo[111225]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:35 compute-0 sudo[111274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:49:35 compute-0 sudo[111274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct 08 09:49:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:35] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct 08 09:49:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:35] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct 08 09:49:35 compute-0 podman[111430]: 2025-10-08 09:49:35.845170389 +0000 UTC m=+0.041408551 container create 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:49:35 compute-0 systemd[1]: Started libpod-conmon-41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc.scope.
Oct 08 09:49:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:49:35 compute-0 podman[111430]: 2025-10-08 09:49:35.82869776 +0000 UTC m=+0.024935942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:49:35 compute-0 podman[111430]: 2025-10-08 09:49:35.930893096 +0000 UTC m=+0.127131318 container init 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 09:49:35 compute-0 podman[111430]: 2025-10-08 09:49:35.939201703 +0000 UTC m=+0.135439875 container start 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 09:49:35 compute-0 jolly_beaver[111483]: 167 167
Oct 08 09:49:35 compute-0 systemd[1]: libpod-41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc.scope: Deactivated successfully.
Oct 08 09:49:35 compute-0 podman[111430]: 2025-10-08 09:49:35.944551591 +0000 UTC m=+0.140789833 container attach 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:49:35 compute-0 podman[111430]: 2025-10-08 09:49:35.946165766 +0000 UTC m=+0.142403968 container died 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:49:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 08 09:49:35 compute-0 ceph-mon[73572]: osdmap e135: 3 total, 3 up, 3 in
Oct 08 09:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa69376e6181cd047839dade1d1e883cf1b9b3645683a2f959c140e6514ff7b8-merged.mount: Deactivated successfully.
Oct 08 09:49:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 08 09:49:35 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 08 09:49:35 compute-0 podman[111430]: 2025-10-08 09:49:35.989877562 +0000 UTC m=+0.186115724 container remove 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:49:35 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 136 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=135/136 n=2 ec=54/38 lis/c=133/65 les/c/f=134/66/0 sis=135) [1] r=0 lpr=135 pi=[65,135)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:49:36 compute-0 systemd[1]: libpod-conmon-41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc.scope: Deactivated successfully.
Oct 08 09:49:36 compute-0 python3.9[111482]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 08 09:49:36 compute-0 podman[111506]: 2025-10-08 09:49:36.140302676 +0000 UTC m=+0.038892668 container create 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Oct 08 09:49:36 compute-0 systemd[1]: Started libpod-conmon-0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd.scope.
Oct 08 09:49:36 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:36 compute-0 podman[111506]: 2025-10-08 09:49:36.21393929 +0000 UTC m=+0.112529312 container init 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:49:36 compute-0 podman[111506]: 2025-10-08 09:49:36.12512543 +0000 UTC m=+0.023715452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:49:36 compute-0 podman[111506]: 2025-10-08 09:49:36.224845094 +0000 UTC m=+0.123435096 container start 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:49:36 compute-0 podman[111506]: 2025-10-08 09:49:36.22832953 +0000 UTC m=+0.126919542 container attach 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:49:36 compute-0 confident_wing[111547]: {
Oct 08 09:49:36 compute-0 confident_wing[111547]:     "1": [
Oct 08 09:49:36 compute-0 confident_wing[111547]:         {
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "devices": [
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "/dev/loop3"
Oct 08 09:49:36 compute-0 confident_wing[111547]:             ],
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "lv_name": "ceph_lv0",
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "lv_size": "21470642176",
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "name": "ceph_lv0",
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "tags": {
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.cluster_name": "ceph",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.crush_device_class": "",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.encrypted": "0",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.osd_id": "1",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.type": "block",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.vdo": "0",
Oct 08 09:49:36 compute-0 confident_wing[111547]:                 "ceph.with_tpm": "0"
Oct 08 09:49:36 compute-0 confident_wing[111547]:             },
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "type": "block",
Oct 08 09:49:36 compute-0 confident_wing[111547]:             "vg_name": "ceph_vg0"
Oct 08 09:49:36 compute-0 confident_wing[111547]:         }
Oct 08 09:49:36 compute-0 confident_wing[111547]:     ]
Oct 08 09:49:36 compute-0 confident_wing[111547]: }
Oct 08 09:49:36 compute-0 systemd[1]: libpod-0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd.scope: Deactivated successfully.
Oct 08 09:49:36 compute-0 podman[111506]: 2025-10-08 09:49:36.527523452 +0000 UTC m=+0.426113454 container died 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64-merged.mount: Deactivated successfully.
Oct 08 09:49:36 compute-0 podman[111506]: 2025-10-08 09:49:36.60393733 +0000 UTC m=+0.502527332 container remove 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:49:36 compute-0 systemd[1]: libpod-conmon-0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd.scope: Deactivated successfully.
Oct 08 09:49:36 compute-0 sudo[111274]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:36 compute-0 sudo[111694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:49:36 compute-0 sudo[111694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:36 compute-0 sudo[111694]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:36 compute-0 python3.9[111687]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:49:36 compute-0 sudo[111719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:49:36 compute-0 sudo[111719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:36.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:49:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:36.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:49:36 compute-0 ceph-mon[73572]: pgmap v122: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct 08 09:49:36 compute-0 ceph-mon[73572]: osdmap e136: 3 total, 3 up, 3 in
Oct 08 09:49:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:49:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:37.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:49:37 compute-0 podman[111808]: 2025-10-08 09:49:37.169910104 +0000 UTC m=+0.037656667 container create 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:49:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:37.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:37 compute-0 systemd[1]: Started libpod-conmon-77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38.scope.
Oct 08 09:49:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:49:37 compute-0 podman[111808]: 2025-10-08 09:49:37.241266401 +0000 UTC m=+0.109012984 container init 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 09:49:37 compute-0 podman[111808]: 2025-10-08 09:49:37.247960245 +0000 UTC m=+0.115706808 container start 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 08 09:49:37 compute-0 podman[111808]: 2025-10-08 09:49:37.155252655 +0000 UTC m=+0.022999228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:49:37 compute-0 podman[111808]: 2025-10-08 09:49:37.251576585 +0000 UTC m=+0.119323158 container attach 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:49:37 compute-0 elated_kare[111825]: 167 167
Oct 08 09:49:37 compute-0 systemd[1]: libpod-77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38.scope: Deactivated successfully.
Oct 08 09:49:37 compute-0 podman[111808]: 2025-10-08 09:49:37.253266812 +0000 UTC m=+0.121013375 container died 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-57bee7339850195e612f0151499cea7e6813b7873531f7edf4bc47127e9b0cba-merged.mount: Deactivated successfully.
Oct 08 09:49:37 compute-0 podman[111808]: 2025-10-08 09:49:37.287891026 +0000 UTC m=+0.155637589 container remove 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 09:49:37 compute-0 systemd[1]: libpod-conmon-77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38.scope: Deactivated successfully.
Oct 08 09:49:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:37 compute-0 podman[111900]: 2025-10-08 09:49:37.436019073 +0000 UTC m=+0.037188581 container create ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 08 09:49:37 compute-0 systemd[1]: Started libpod-conmon-ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3.scope.
Oct 08 09:49:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:37 compute-0 podman[111900]: 2025-10-08 09:49:37.504503005 +0000 UTC m=+0.105672533 container init ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:49:37 compute-0 podman[111900]: 2025-10-08 09:49:37.512209982 +0000 UTC m=+0.113379490 container start ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct 08 09:49:37 compute-0 podman[111900]: 2025-10-08 09:49:37.51514764 +0000 UTC m=+0.116317178 container attach ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:49:37 compute-0 podman[111900]: 2025-10-08 09:49:37.420483615 +0000 UTC m=+0.021653143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:49:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 215 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Oct 08 09:49:37 compute-0 sudo[112028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvjvdogvgglhytesdqzbzlrdarxnjkyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916977.2927132-996-226758820545548/AnsiballZ_systemd.py'
Oct 08 09:49:37 compute-0 sudo[112028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:38 compute-0 lvm[112068]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:49:38 compute-0 lvm[112068]: VG ceph_vg0 finished
Oct 08 09:49:38 compute-0 python3.9[112037]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:49:38 compute-0 frosty_haibt[111917]: {}
Oct 08 09:49:38 compute-0 systemd[1]: libpod-ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3.scope: Deactivated successfully.
Oct 08 09:49:38 compute-0 podman[111900]: 2025-10-08 09:49:38.216579179 +0000 UTC m=+0.817748697 container died ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:49:38 compute-0 systemd[1]: libpod-ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3.scope: Consumed 1.044s CPU time.
Oct 08 09:49:38 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 08 09:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d-merged.mount: Deactivated successfully.
Oct 08 09:49:38 compute-0 podman[111900]: 2025-10-08 09:49:38.282573959 +0000 UTC m=+0.883743467 container remove ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:49:38 compute-0 systemd[1]: libpod-conmon-ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3.scope: Deactivated successfully.
Oct 08 09:49:38 compute-0 sudo[111719]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:38 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 08 09:49:38 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 08 09:49:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:49:38 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 08 09:49:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:49:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:38 compute-0 sudo[112089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:49:38 compute-0 sudo[112089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:38 compute-0 sudo[112089]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:38 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 08 09:49:38 compute-0 sudo[112028]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:39 compute-0 ceph-mon[73572]: pgmap v124: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 215 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Oct 08 09:49:39 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:39 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:49:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:39.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:39 compute-0 python3.9[112267]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 08 09:49:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:49:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Oct 08 09:49:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 08 09:49:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 08 09:49:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 08 09:49:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 08 09:49:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 08 09:49:40 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 08 09:49:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=infra.usagestats t=2025-10-08T09:49:40.442675945Z level=info msg="Usage stats are ready to report"
Oct 08 09:49:40 compute-0 sudo[112294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:49:40 compute-0 sudo[112294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:49:40 compute-0 sudo[112294]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:40 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:49:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280042b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:49:41 compute-0 ceph-mon[73572]: pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s; 18 B/s, 0 objects/s recovering
Oct 08 09:49:41 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 08 09:49:41 compute-0 ceph-mon[73572]: osdmap e137: 3 total, 3 up, 3 in
Oct 08 09:49:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:41.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:41.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy ignored for local
Oct 08 09:49:41 compute-0 kernel: ganesha.nfsd[107069]: segfault at 50 ip 00007f66db14e32e sp 00007f669cff8210 error 4 in libntirpc.so.5.8[7f66db133000+2c000] likely on CPU 6 (core 0, socket 6)
Oct 08 09:49:41 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 09:49:41 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 08 09:49:41 compute-0 systemd[1]: Started Process Core Dump (PID 112320/UID 0).
Oct 08 09:49:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 916 B/s rd, 152 B/s wr, 1 op/s; 16 B/s, 0 objects/s recovering
Oct 08 09:49:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Oct 08 09:49:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 08 09:49:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 08 09:49:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 08 09:49:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 08 09:49:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 08 09:49:42 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 08 09:49:42 compute-0 sudo[112448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwgzaconyhptgpohqytujexjzyryfpvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916981.9774554-1167-52070338560478/AnsiballZ_systemd.py'
Oct 08 09:49:42 compute-0 sudo[112448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:42 compute-0 systemd-coredump[112321]: Process 96172 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 64:
                                                    #0  0x00007f66db14e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 09:49:42 compute-0 python3.9[112450]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:49:42 compute-0 systemd[1]: systemd-coredump@0-112320-0.service: Deactivated successfully.
Oct 08 09:49:42 compute-0 systemd[1]: systemd-coredump@0-112320-0.service: Consumed 1.171s CPU time.
Oct 08 09:49:42 compute-0 podman[112456]: 2025-10-08 09:49:42.649971495 +0000 UTC m=+0.034335196 container died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 09:49:42 compute-0 sudo[112448]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e-merged.mount: Deactivated successfully.
Oct 08 09:49:42 compute-0 podman[112456]: 2025-10-08 09:49:42.759439753 +0000 UTC m=+0.143803404 container remove c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 09:49:42 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 09:49:42 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 09:49:42 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.770s CPU time.
Oct 08 09:49:43 compute-0 sudo[112650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqarvqssgzqsgtxwdxmgrnqghnbpldmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916982.8287585-1167-243651928800707/AnsiballZ_systemd.py'
Oct 08 09:49:43 compute-0 sudo[112650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:43.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:43 compute-0 ceph-mon[73572]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 916 B/s rd, 152 B/s wr, 1 op/s; 16 B/s, 0 objects/s recovering
Oct 08 09:49:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 08 09:49:43 compute-0 ceph-mon[73572]: osdmap e138: 3 total, 3 up, 3 in
Oct 08 09:49:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:43 compute-0 python3.9[112652]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:49:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:43 compute-0 sudo[112650]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 933 B/s wr, 2 op/s; 14 B/s, 0 objects/s recovering
Oct 08 09:49:44 compute-0 sshd-session[103775]: Connection closed by 192.168.122.30 port 41332
Oct 08 09:49:44 compute-0 sshd-session[103771]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:49:44 compute-0 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Oct 08 09:49:44 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Oct 08 09:49:44 compute-0 systemd[1]: session-40.scope: Consumed 1min 3.340s CPU time.
Oct 08 09:49:44 compute-0 systemd-logind[798]: Removed session 40.
Oct 08 09:49:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 08 09:49:44 compute-0 ceph-mon[73572]: pgmap v129: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 933 B/s wr, 2 op/s; 14 B/s, 0 objects/s recovering
Oct 08 09:49:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 08 09:49:44 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 08 09:49:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 08 09:49:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 08 09:49:45 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 08 09:49:45 compute-0 ceph-mon[73572]: osdmap e139: 3 total, 3 up, 3 in
Oct 08 09:49:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Oct 08 09:49:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:45] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct 08 09:49:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:45] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct 08 09:49:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 08 09:49:46 compute-0 ceph-mon[73572]: osdmap e140: 3 total, 3 up, 3 in
Oct 08 09:49:46 compute-0 ceph-mon[73572]: pgmap v132: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Oct 08 09:49:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 08 09:49:46 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 08 09:49:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:46.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:49:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:46.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:49:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:47.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:47.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 08 09:49:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 08 09:49:47 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 08 09:49:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094947 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:49:47 compute-0 ceph-mon[73572]: osdmap e141: 3 total, 3 up, 3 in
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:49:47
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Some PGs (0.002833) are unknown; try again later
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:49:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:49:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:49:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:49:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:49:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:49:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:49:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:49:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:49:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:49:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:49:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:49:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:49:48 compute-0 ceph-mon[73572]: osdmap e142: 3 total, 3 up, 3 in
Oct 08 09:49:48 compute-0 ceph-mon[73572]: pgmap v135: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:49:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:49:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:49:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:49.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:49:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:49 compute-0 sshd-session[112685]: Accepted publickey for zuul from 192.168.122.30 port 51686 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:49:49 compute-0 systemd-logind[798]: New session 41 of user zuul.
Oct 08 09:49:49 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct 08 09:49:49 compute-0 sshd-session[112685]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:49:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 758 B/s wr, 2 op/s; 40 B/s, 0 objects/s recovering
Oct 08 09:49:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Oct 08 09:49:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 08 09:49:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 08 09:49:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 08 09:49:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 08 09:49:49 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 08 09:49:49 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 08 09:49:50 compute-0 python3.9[112839]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:49:50 compute-0 ceph-mon[73572]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 758 B/s wr, 2 op/s; 40 B/s, 0 objects/s recovering
Oct 08 09:49:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 08 09:49:50 compute-0 ceph-mon[73572]: osdmap e143: 3 total, 3 up, 3 in
Oct 08 09:49:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:51.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:51 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 143 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=74/74 les/c/f=75/75/0 sis=143) [1] r=0 lpr=143 pi=[74,143)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s; 36 B/s, 0 objects/s recovering
Oct 08 09:49:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 08 09:49:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:49:51 compute-0 sudo[112995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqsuayssjjkseuadmtzgsyrcglcfrtdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916991.4281046-68-277702674845952/AnsiballZ_getent.py'
Oct 08 09:49:51 compute-0 sudo[112995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 08 09:49:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094952 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:49:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 08 09:49:52 compute-0 python3.9[112997]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 08 09:49:52 compute-0 sudo[112995]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:49:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 08 09:49:52 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 08 09:49:52 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 144 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=74/74 les/c/f=75/75/0 sis=144) [1]/[0] r=-1 lpr=144 pi=[74,144)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:52 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 144 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=74/74 les/c/f=75/75/0 sis=144) [1]/[0] r=-1 lpr=144 pi=[74,144)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 08 09:49:52 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 144 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=98/98 les/c/f=99/99/0 sis=144) [1] r=0 lpr=144 pi=[98,144)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:52 compute-0 sudo[113148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iijqofddofgcencogqbukoecaoupnwhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916992.5863142-104-124979736863936/AnsiballZ_setup.py'
Oct 08 09:49:52 compute-0 sudo[113148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:53 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 1.
Oct 08 09:49:53 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:49:53 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.770s CPU time.
Oct 08 09:49:53 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:49:53 compute-0 python3.9[113150]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:49:53 compute-0 ceph-mon[73572]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s; 36 B/s, 0 objects/s recovering
Oct 08 09:49:53 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 08 09:49:53 compute-0 ceph-mon[73572]: osdmap e144: 3 total, 3 up, 3 in
Oct 08 09:49:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:53.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:49:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:49:53 compute-0 podman[113209]: 2025-10-08 09:49:53.279267221 +0000 UTC m=+0.039430764 container create beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 08 09:49:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 08 09:49:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 08 09:49:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 145 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=98/98 les/c/f=99/99/0 sis=145) [1]/[0] r=-1 lpr=145 pi=[98,145)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 145 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=98/98 les/c/f=99/99/0 sis=145) [1]/[0] r=-1 lpr=145 pi=[98,145)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 08 09:49:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 08 09:49:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:49:53 compute-0 podman[113209]: 2025-10-08 09:49:53.347666742 +0000 UTC m=+0.107830305 container init beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 08 09:49:53 compute-0 podman[113209]: 2025-10-08 09:49:53.260814937 +0000 UTC m=+0.020978490 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:49:53 compute-0 podman[113209]: 2025-10-08 09:49:53.356087722 +0000 UTC m=+0.116251245 container start beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 09:49:53 compute-0 bash[113209]: beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc
Oct 08 09:49:53 compute-0 sudo[113148]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:53 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:49:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 09:49:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 09:49:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 09:49:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 09:49:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 09:49:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 09:49:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 09:49:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:49:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 08 09:49:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 08 09:49:53 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 08 09:49:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 146 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=5 ec=54/38 lis/c=144/74 les/c/f=145/75/0 sis=146) [1] r=0 lpr=146 pi=[74,146)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:53 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 146 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=5 ec=54/38 lis/c=144/74 les/c/f=145/75/0 sis=146) [1] r=0 lpr=146 pi=[74,146)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 1 active+remapped, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s; 27 B/s, 2 objects/s recovering
Oct 08 09:49:53 compute-0 sudo[113340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oknptvoraqsywjxpkcqovrfymhiendhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916992.5863142-104-124979736863936/AnsiballZ_dnf.py'
Oct 08 09:49:53 compute-0 sudo[113340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:54 compute-0 python3.9[113342]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 08 09:49:54 compute-0 ceph-mon[73572]: osdmap e145: 3 total, 3 up, 3 in
Oct 08 09:49:54 compute-0 ceph-mon[73572]: osdmap e146: 3 total, 3 up, 3 in
Oct 08 09:49:54 compute-0 ceph-mon[73572]: pgmap v142: 353 pgs: 1 active+remapped, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s; 27 B/s, 2 objects/s recovering
Oct 08 09:49:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 08 09:49:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 08 09:49:54 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 08 09:49:54 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 147 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=5 ec=54/38 lis/c=145/98 les/c/f=146/99/0 sis=147) [1] r=0 lpr=147 pi=[98,147)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 08 09:49:54 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 147 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=5 ec=54/38 lis/c=145/98 les/c/f=146/99/0 sis=147) [1] r=0 lpr=147 pi=[98,147)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 08 09:49:54 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 147 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=146/147 n=5 ec=54/38 lis/c=144/74 les/c/f=145/75/0 sis=146) [1] r=0 lpr=146 pi=[74,146)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:49:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:55.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:55.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:55 compute-0 sudo[113340]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 08 09:49:55 compute-0 ceph-mon[73572]: osdmap e147: 3 total, 3 up, 3 in
Oct 08 09:49:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 08 09:49:55 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 08 09:49:55 compute-0 ceph-osd[81751]: osd.1 pg_epoch: 148 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=147/148 n=5 ec=54/38 lis/c=145/98 les/c/f=146/99/0 sis=147) [1] r=0 lpr=147 pi=[98,147)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 08 09:49:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 1 active+remapped, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 305 B/s wr, 2 op/s; 32 B/s, 2 objects/s recovering
Oct 08 09:49:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:55] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 08 09:49:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:55] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 08 09:49:55 compute-0 sudo[113496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlgtmrtfxvvnlkejxwqmoqipcehmjtyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916995.607534-146-62839649867933/AnsiballZ_dnf.py'
Oct 08 09:49:55 compute-0 sudo[113496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:56 compute-0 python3.9[113498]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:49:56 compute-0 ceph-mon[73572]: osdmap e148: 3 total, 3 up, 3 in
Oct 08 09:49:56 compute-0 ceph-mon[73572]: pgmap v145: 353 pgs: 1 active+remapped, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 305 B/s wr, 2 op/s; 32 B/s, 2 objects/s recovering
Oct 08 09:49:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:56.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:49:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:57.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:57.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:57 compute-0 sudo[113496]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 1 active+remapped, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 235 B/s wr, 2 op/s; 25 B/s, 2 objects/s recovering
Oct 08 09:49:58 compute-0 sudo[113651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgkjviltgmtaqshldpyuyejjkwnixqyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916997.619302-170-44029078979805/AnsiballZ_systemd.py'
Oct 08 09:49:58 compute-0 sudo[113651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:49:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:49:58 compute-0 python3.9[113653]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 09:49:58 compute-0 sudo[113651]: pam_unix(sudo:session): session closed for user root
Oct 08 09:49:58 compute-0 ceph-mon[73572]: pgmap v146: 353 pgs: 1 active+remapped, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 235 B/s wr, 2 op/s; 25 B/s, 2 objects/s recovering
Oct 08 09:49:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:49:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:49:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:49:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:49:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:59.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:49:59 compute-0 python3.9[113807]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:49:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:59 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:49:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:59 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:49:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:59 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 09:49:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 822 B/s wr, 2 op/s; 17 B/s, 1 objects/s recovering
Oct 08 09:50:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 08 09:50:00 compute-0 sudo[113958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epnmzrezouialjiadfymgitvqanjjpwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759916999.6662505-224-68666796158795/AnsiballZ_sefcontext.py'
Oct 08 09:50:00 compute-0 sudo[113958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:00 compute-0 python3.9[113960]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 08 09:50:00 compute-0 sudo[113958]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:00 compute-0 sudo[113961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:50:00 compute-0 sudo[113961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:00 compute-0 sudo[113961]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:00 compute-0 ceph-mon[73572]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 822 B/s wr, 2 op/s; 17 B/s, 1 objects/s recovering
Oct 08 09:50:00 compute-0 ceph-mon[73572]: overall HEALTH_OK
Oct 08 09:50:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:01.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:01 compute-0 python3.9[114136]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:50:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 511 B/s wr, 1 op/s
Oct 08 09:50:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095002 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:50:02 compute-0 sudo[114293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntyzkxbeloqkuvbaqgchizunzlisjjrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917002.049296-278-80045285355930/AnsiballZ_dnf.py'
Oct 08 09:50:02 compute-0 sudo[114293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:02 compute-0 python3.9[114295]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:50:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:50:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:50:02 compute-0 ceph-mon[73572]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 511 B/s wr, 1 op/s
Oct 08 09:50:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:03.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 444 B/s wr, 1 op/s
Oct 08 09:50:03 compute-0 sudo[114293]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:50:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:03 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:50:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:03 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:50:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:03 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:50:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:04 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 09:50:04 compute-0 sudo[114448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrsxjntgiljiigywudtqxkcngdkzmhsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917004.10527-302-274022575460124/AnsiballZ_command.py'
Oct 08 09:50:04 compute-0 sudo[114448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:04 compute-0 python3.9[114450]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:50:04 compute-0 ceph-mon[73572]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 444 B/s wr, 1 op/s
Oct 08 09:50:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:05.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:05 compute-0 sudo[114448]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 402 B/s wr, 1 op/s
Oct 08 09:50:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:05] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 08 09:50:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:05] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 08 09:50:06 compute-0 sudo[114737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjflzhikbuoodccscmpiipjofbtorflh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917005.803563-326-66231637711224/AnsiballZ_file.py'
Oct 08 09:50:06 compute-0 sudo[114737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:06 compute-0 python3.9[114739]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 08 09:50:06 compute-0 sudo[114737]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:06.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:50:06 compute-0 ceph-mon[73572]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 402 B/s wr, 1 op/s
Oct 08 09:50:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:07.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:07 compute-0 python3.9[114890]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:50:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Oct 08 09:50:08 compute-0 sudo[115043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovrqevzrzzhwnpowwincwwocimqcnzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917007.7924905-374-153474133736497/AnsiballZ_dnf.py'
Oct 08 09:50:08 compute-0 sudo[115043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:08 compute-0 python3.9[115045]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:50:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:08 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:50:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:08 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:50:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:08 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:50:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:09 compute-0 ceph-mon[73572]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Oct 08 09:50:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:09.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:09 compute-0 sudo[115043]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 09:50:10 compute-0 sudo[115198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpeurtwzpsevrkasulsxwezmvxarbvpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917009.8181477-401-208616196135844/AnsiballZ_dnf.py'
Oct 08 09:50:10 compute-0 sudo[115198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:10 compute-0 python3.9[115200]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:50:11 compute-0 ceph-mon[73572]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 09:50:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:11.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:11 compute-0 sudo[115198]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Oct 08 09:50:12 compute-0 sudo[115353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veebkokqrdbnrzgrvnppblrtyjepiqwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917012.0870519-437-168695703262552/AnsiballZ_stat.py'
Oct 08 09:50:12 compute-0 sudo[115353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:12 compute-0 python3.9[115355]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:50:12 compute-0 sudo[115353]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:13 compute-0 ceph-mon[73572]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Oct 08 09:50:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:13.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:13 compute-0 sudo[115508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqfyyypfmweupdiorthlnmwjxkdqvccg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917012.8269274-461-69006444802563/AnsiballZ_slurp.py'
Oct 08 09:50:13 compute-0 sudo[115508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:13 compute-0 python3.9[115510]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 08 09:50:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:13 compute-0 sudo[115508]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:50:14 compute-0 ceph-mon[73572]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:50:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:50:14 compute-0 sshd-session[112688]: Connection closed by 192.168.122.30 port 51686
Oct 08 09:50:14 compute-0 sshd-session[112685]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:50:14 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct 08 09:50:14 compute-0 systemd[1]: session-41.scope: Consumed 17.287s CPU time.
Oct 08 09:50:14 compute-0 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Oct 08 09:50:14 compute-0 systemd-logind[798]: Removed session 41.
Oct 08 09:50:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:15 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:15.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:15.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:15 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:15 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:50:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:15] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 08 09:50:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:15] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 08 09:50:16 compute-0 ceph-mon[73572]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:50:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:16.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:50:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:17.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095017 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:50:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:50:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:50:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:50:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:50:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:50:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:50:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:50:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:50:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:50:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:50:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:50:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:18 compute-0 ceph-mon[73572]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:50:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:50:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:19 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:19.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:19.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:19 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:19 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 08 09:50:19 compute-0 sshd-session[115556]: Accepted publickey for zuul from 192.168.122.30 port 35846 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:50:19 compute-0 systemd-logind[798]: New session 42 of user zuul.
Oct 08 09:50:19 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct 08 09:50:19 compute-0 sshd-session[115556]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:50:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:20 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:50:20 compute-0 sudo[115711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:50:20 compute-0 sudo[115711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:20 compute-0 sudo[115711]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:20 compute-0 ceph-mon[73572]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 08 09:50:20 compute-0 python3.9[115710]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:50:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:21 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:21.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:21.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:21 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:21 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:50:21 compute-0 python3.9[115890]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:50:23 compute-0 ceph-mon[73572]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:50:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:23 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:23.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:23.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:23 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:23 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:50:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095024 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:50:24 compute-0 python3.9[116086]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:50:24 compute-0 sshd-session[115559]: Connection closed by 192.168.122.30 port 35846
Oct 08 09:50:24 compute-0 sshd-session[115556]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:50:24 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct 08 09:50:24 compute-0 systemd[1]: session-42.scope: Consumed 2.311s CPU time.
Oct 08 09:50:24 compute-0 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Oct 08 09:50:24 compute-0 systemd-logind[798]: Removed session 42.
Oct 08 09:50:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:25 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:25 compute-0 ceph-mon[73572]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:50:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:25.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:25.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:25 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:25 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b0002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:50:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:25] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 08 09:50:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:25] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 08 09:50:26 compute-0 ceph-mon[73572]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:50:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:26.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:50:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:27 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:27.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:27.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:27 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:27 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:50:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:28 compute-0 ceph-mon[73572]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:50:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:29 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:29.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:29 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:29 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:50:29 compute-0 sshd-session[116118]: Accepted publickey for zuul from 192.168.122.30 port 58126 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:50:29 compute-0 systemd-logind[798]: New session 43 of user zuul.
Oct 08 09:50:29 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct 08 09:50:30 compute-0 sshd-session[116118]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:50:30 compute-0 ceph-mon[73572]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:50:31 compute-0 python3.9[116271]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:50:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:31 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:31.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:31 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:31 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:50:31 compute-0 python3.9[116426]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:50:32 compute-0 sudo[116581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plryriwnqfsmovvioyemlgamqeiwjycy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917032.3107634-80-168116134812081/AnsiballZ_setup.py'
Oct 08 09:50:32 compute-0 sudo[116581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:50:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:50:32 compute-0 python3.9[116583]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:50:32 compute-0 ceph-mon[73572]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:50:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:50:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:33 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:33 compute-0 sudo[116581]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:33.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:33.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:33 compute-0 kernel: ganesha.nfsd[115539]: segfault at 50 ip 00007f5a82ef932e sp 00007f5a4dffa210 error 4 in libntirpc.so.5.8[7f5a82ede000+2c000] likely on CPU 6 (core 0, socket 6)
Oct 08 09:50:33 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 09:50:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:33 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy ignored for local
Oct 08 09:50:33 compute-0 systemd[1]: Started Process Core Dump (PID 116624/UID 0).
Oct 08 09:50:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:33 compute-0 sudo[116668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtguxezedpnmudwgparzhbtjsqohiczc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917032.3107634-80-168116134812081/AnsiballZ_dnf.py'
Oct 08 09:50:33 compute-0 sudo[116668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:50:33 compute-0 python3.9[116670]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:50:34 compute-0 systemd-coredump[116640]: Process 113229 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f5a82ef932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 09:50:34 compute-0 systemd[1]: systemd-coredump@1-116624-0.service: Deactivated successfully.
Oct 08 09:50:34 compute-0 systemd[1]: systemd-coredump@1-116624-0.service: Consumed 1.167s CPU time.
Oct 08 09:50:34 compute-0 podman[116677]: 2025-10-08 09:50:34.66706905 +0000 UTC m=+0.031950634 container died beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:50:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860-merged.mount: Deactivated successfully.
Oct 08 09:50:34 compute-0 podman[116677]: 2025-10-08 09:50:34.715115337 +0000 UTC m=+0.079996901 container remove beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 09:50:34 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 09:50:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095034 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:50:34 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 09:50:34 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.397s CPU time.
Oct 08 09:50:34 compute-0 ceph-mon[73572]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:50:34 compute-0 sudo[116668]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:35.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:35.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:35 compute-0 sudo[116870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtdtdhplyolqgjgqxwmmpoejogfsclzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917035.1079624-116-220492440514523/AnsiballZ_setup.py'
Oct 08 09:50:35 compute-0 sudo[116870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:35 compute-0 python3.9[116872]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:50:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:50:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:35] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 08 09:50:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:35] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 08 09:50:35 compute-0 sudo[116870]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:36 compute-0 sudo[117066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzaqnifqewugjkjmeictgechiunabuqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917036.4101012-149-162166149680042/AnsiballZ_file.py'
Oct 08 09:50:36 compute-0 sudo[117066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:36.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:50:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:36.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:50:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:36.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:50:36 compute-0 ceph-mon[73572]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:50:37 compute-0 python3.9[117068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:50:37 compute-0 sudo[117066]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:37.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:37.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:37 compute-0 sudo[117219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjcmjaoopwrrjbwmemellxdudljffpxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917037.2035828-173-274355743635942/AnsiballZ_command.py'
Oct 08 09:50:37 compute-0 sudo[117219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:50:37 compute-0 python3.9[117221]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:50:37 compute-0 sudo[117219]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:38 compute-0 sudo[117385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrsjbkryslpibppubhivhlicltkmrdeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917038.0794814-197-84542151160502/AnsiballZ_stat.py'
Oct 08 09:50:38 compute-0 sudo[117385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:38 compute-0 sudo[117388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:50:38 compute-0 sudo[117388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:38 compute-0 sudo[117388]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:38 compute-0 sudo[117413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 09:50:38 compute-0 sudo[117413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:38 compute-0 python3.9[117387]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:50:38 compute-0 sudo[117385]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:38 compute-0 ceph-mon[73572]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:50:39 compute-0 sudo[117555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvelswmbkcilvfdkfnyvapjqtzsghcqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917038.0794814-197-84542151160502/AnsiballZ_file.py'
Oct 08 09:50:39 compute-0 sudo[117555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:39.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:39.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:39 compute-0 python3.9[117559]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:50:39 compute-0 podman[117587]: 2025-10-08 09:50:39.318366479 +0000 UTC m=+0.072962241 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:50:39 compute-0 sudo[117555]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095039 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:50:39 compute-0 podman[117587]: 2025-10-08 09:50:39.436361349 +0000 UTC m=+0.190957091 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct 08 09:50:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:50:39 compute-0 sudo[117833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sajwsztccvkbwasdhgzwwsqwxoszhcmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917039.485309-233-255625468078310/AnsiballZ_stat.py'
Oct 08 09:50:39 compute-0 sudo[117833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:39 compute-0 python3.9[117842]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:50:39 compute-0 podman[117877]: 2025-10-08 09:50:39.951221978 +0000 UTC m=+0.055172131 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:50:39 compute-0 podman[117877]: 2025-10-08 09:50:39.961463923 +0000 UTC m=+0.065414056 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:50:39 compute-0 sudo[117833]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:40 compute-0 sudo[118038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uysvrvbgzjepqmmfzhdzcceebvxcreha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917039.485309-233-255625468078310/AnsiballZ_file.py'
Oct 08 09:50:40 compute-0 sudo[118038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:40 compute-0 python3.9[118042]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:50:40 compute-0 podman[118075]: 2025-10-08 09:50:40.417308985 +0000 UTC m=+0.087035720 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:50:40 compute-0 sudo[118038]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:40 compute-0 podman[118097]: 2025-10-08 09:50:40.489198071 +0000 UTC m=+0.053971032 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:50:40 compute-0 podman[118075]: 2025-10-08 09:50:40.495240968 +0000 UTC m=+0.164967703 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 09:50:40 compute-0 podman[118186]: 2025-10-08 09:50:40.69512257 +0000 UTC m=+0.052510105 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.component=keepalived-container, architecture=x86_64, name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 08 09:50:40 compute-0 podman[118186]: 2025-10-08 09:50:40.708833047 +0000 UTC m=+0.066220562 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, release=1793, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.expose-services=, version=2.2.4, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, name=keepalived, com.redhat.component=keepalived-container)
Oct 08 09:50:40 compute-0 sudo[118238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:50:40 compute-0 sudo[118238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:40 compute-0 sudo[118238]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:40 compute-0 podman[118307]: 2025-10-08 09:50:40.917553267 +0000 UTC m=+0.048209814 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:50:40 compute-0 podman[118307]: 2025-10-08 09:50:40.957445849 +0000 UTC m=+0.088102376 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:50:41 compute-0 ceph-mon[73572]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:50:41 compute-0 sudo[118442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcpbgwrbvtfujhcqmdgtqsnguxjzzbwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917040.6472287-272-232130393462109/AnsiballZ_ini_file.py'
Oct 08 09:50:41 compute-0 sudo[118442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:41 compute-0 podman[118451]: 2025-10-08 09:50:41.154592322 +0000 UTC m=+0.047831082 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:50:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:41.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:41.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:41 compute-0 python3.9[118450]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:50:41 compute-0 sudo[118442]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:41 compute-0 podman[118451]: 2025-10-08 09:50:41.317972242 +0000 UTC m=+0.211210982 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 09:50:41 compute-0 podman[118665]: 2025-10-08 09:50:41.646982377 +0000 UTC m=+0.049536998 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:50:41 compute-0 podman[118665]: 2025-10-08 09:50:41.680339925 +0000 UTC m=+0.082894526 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 09:50:41 compute-0 sudo[118737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqqssumcqiwpwgfsandongshbzqmtquc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917041.454223-272-200539883776888/AnsiballZ_ini_file.py'
Oct 08 09:50:41 compute-0 sudo[118737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:50:41 compute-0 sudo[117413]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:50:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:50:41 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:41 compute-0 sudo[118749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:50:41 compute-0 sudo[118749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:41 compute-0 sudo[118749]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:41 compute-0 sudo[118774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:50:41 compute-0 sudo[118774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:41 compute-0 python3.9[118745]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:50:41 compute-0 sudo[118737]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:42 compute-0 sudo[118965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psjrpjjoktsgrkzwcfdyfzyyusdixxsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917042.016057-272-181551952014614/AnsiballZ_ini_file.py'
Oct 08 09:50:42 compute-0 sudo[118965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:42 compute-0 sudo[118774]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:50:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:50:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:50:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:50:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 191 B/s rd, 0 op/s
Oct 08 09:50:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:50:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:50:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:50:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:50:42 compute-0 python3.9[118969]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:50:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:50:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:50:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:50:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:50:42 compute-0 sudo[118965]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:42 compute-0 sudo[118982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:50:42 compute-0 sudo[118982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:42 compute-0 sudo[118982]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:42 compute-0 sudo[119030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:50:42 compute-0 sudo[119030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:42 compute-0 ceph-mon[73572]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:50:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:50:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:50:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:50:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:50:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:50:42 compute-0 sudo[119200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrpnhqqhcbshbgdvneabudxkxpfvfdgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917042.583671-272-84534615567256/AnsiballZ_ini_file.py'
Oct 08 09:50:42 compute-0 sudo[119200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:42 compute-0 podman[119224]: 2025-10-08 09:50:42.930440312 +0000 UTC m=+0.037700150 container create f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:50:42 compute-0 systemd[1]: Started libpod-conmon-f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf.scope.
Oct 08 09:50:42 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:50:43 compute-0 python3.9[119208]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:50:43 compute-0 podman[119224]: 2025-10-08 09:50:42.91598323 +0000 UTC m=+0.023243068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:50:43 compute-0 podman[119224]: 2025-10-08 09:50:43.013071389 +0000 UTC m=+0.120331257 container init f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:50:43 compute-0 podman[119224]: 2025-10-08 09:50:43.020673657 +0000 UTC m=+0.127933495 container start f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:50:43 compute-0 podman[119224]: 2025-10-08 09:50:43.02446083 +0000 UTC m=+0.131720688 container attach f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:50:43 compute-0 distracted_nobel[119241]: 167 167
Oct 08 09:50:43 compute-0 podman[119224]: 2025-10-08 09:50:43.025710111 +0000 UTC m=+0.132969949 container died f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:50:43 compute-0 systemd[1]: libpod-f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf.scope: Deactivated successfully.
Oct 08 09:50:43 compute-0 sudo[119200]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f23a8f66cf4354f56a707a3a7c52c4bbf1218d56f52dab93e3f9a6a4579083c0-merged.mount: Deactivated successfully.
Oct 08 09:50:43 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 08 09:50:43 compute-0 podman[119224]: 2025-10-08 09:50:43.060385312 +0000 UTC m=+0.167645150 container remove f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Oct 08 09:50:43 compute-0 systemd[1]: libpod-conmon-f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf.scope: Deactivated successfully.
Oct 08 09:50:43 compute-0 podman[119290]: 2025-10-08 09:50:43.193454844 +0000 UTC m=+0.033473823 container create 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:50:43 compute-0 systemd[1]: Started libpod-conmon-5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a.scope.
Oct 08 09:50:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:43.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:43.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:43 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:43 compute-0 podman[119290]: 2025-10-08 09:50:43.179983864 +0000 UTC m=+0.020002883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:50:43 compute-0 podman[119290]: 2025-10-08 09:50:43.278769008 +0000 UTC m=+0.118788047 container init 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:50:43 compute-0 podman[119290]: 2025-10-08 09:50:43.292856457 +0000 UTC m=+0.132875456 container start 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 09:50:43 compute-0 podman[119290]: 2025-10-08 09:50:43.297444957 +0000 UTC m=+0.137463956 container attach 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:50:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:43 compute-0 upbeat_feistel[119306]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:50:43 compute-0 upbeat_feistel[119306]: --> All data devices are unavailable
Oct 08 09:50:43 compute-0 systemd[1]: libpod-5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a.scope: Deactivated successfully.
Oct 08 09:50:43 compute-0 podman[119290]: 2025-10-08 09:50:43.617675896 +0000 UTC m=+0.457694905 container died 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be-merged.mount: Deactivated successfully.
Oct 08 09:50:43 compute-0 podman[119290]: 2025-10-08 09:50:43.654555359 +0000 UTC m=+0.494574348 container remove 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:50:43 compute-0 systemd[1]: libpod-conmon-5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a.scope: Deactivated successfully.
Oct 08 09:50:43 compute-0 sudo[119030]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:43 compute-0 sudo[119433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:50:43 compute-0 sudo[119433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:43 compute-0 sudo[119433]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:43 compute-0 sudo[119482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzbspkjntlxsdnlzcibjmwqgcnwhwtec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917043.5368814-365-182184444943489/AnsiballZ_dnf.py'
Oct 08 09:50:43 compute-0 sudo[119482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:43 compute-0 sudo[119487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:50:43 compute-0 ceph-mon[73572]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 191 B/s rd, 0 op/s
Oct 08 09:50:43 compute-0 ceph-mon[73572]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 08 09:50:43 compute-0 sudo[119487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:44 compute-0 python3.9[119486]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:50:44 compute-0 podman[119557]: 2025-10-08 09:50:44.17163332 +0000 UTC m=+0.043323985 container create 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:50:44 compute-0 systemd[1]: Started libpod-conmon-75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db.scope.
Oct 08 09:50:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:50:44 compute-0 podman[119557]: 2025-10-08 09:50:44.14681659 +0000 UTC m=+0.018507235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:50:44 compute-0 podman[119557]: 2025-10-08 09:50:44.261335946 +0000 UTC m=+0.133026681 container init 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:50:44 compute-0 podman[119557]: 2025-10-08 09:50:44.27277296 +0000 UTC m=+0.144463585 container start 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 09:50:44 compute-0 funny_driscoll[119573]: 167 167
Oct 08 09:50:44 compute-0 systemd[1]: libpod-75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db.scope: Deactivated successfully.
Oct 08 09:50:44 compute-0 podman[119557]: 2025-10-08 09:50:44.278374002 +0000 UTC m=+0.150064727 container attach 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:50:44 compute-0 podman[119557]: 2025-10-08 09:50:44.278871369 +0000 UTC m=+0.150562034 container died 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc68ad8aac6aceda942fc122b0af17e0c3a716cafb683490721397e23be52085-merged.mount: Deactivated successfully.
Oct 08 09:50:44 compute-0 podman[119557]: 2025-10-08 09:50:44.334212224 +0000 UTC m=+0.205902879 container remove 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:50:44 compute-0 systemd[1]: libpod-conmon-75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db.scope: Deactivated successfully.
Oct 08 09:50:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 478 B/s wr, 1 op/s
Oct 08 09:50:44 compute-0 podman[119599]: 2025-10-08 09:50:44.484646452 +0000 UTC m=+0.038039961 container create 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:50:44 compute-0 systemd[1]: Started libpod-conmon-2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0.scope.
Oct 08 09:50:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:44 compute-0 podman[119599]: 2025-10-08 09:50:44.548247607 +0000 UTC m=+0.101641166 container init 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:50:44 compute-0 podman[119599]: 2025-10-08 09:50:44.554917155 +0000 UTC m=+0.108310664 container start 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:50:44 compute-0 podman[119599]: 2025-10-08 09:50:44.557732687 +0000 UTC m=+0.111126236 container attach 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:50:44 compute-0 podman[119599]: 2025-10-08 09:50:44.469128006 +0000 UTC m=+0.022521535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:50:44 compute-0 loving_jepsen[119615]: {
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:     "1": [
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:         {
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "devices": [
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "/dev/loop3"
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             ],
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "lv_name": "ceph_lv0",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "lv_size": "21470642176",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "name": "ceph_lv0",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "tags": {
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.cluster_name": "ceph",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.crush_device_class": "",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.encrypted": "0",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.osd_id": "1",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.type": "block",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.vdo": "0",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:                 "ceph.with_tpm": "0"
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             },
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "type": "block",
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:             "vg_name": "ceph_vg0"
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:         }
Oct 08 09:50:44 compute-0 loving_jepsen[119615]:     ]
Oct 08 09:50:44 compute-0 loving_jepsen[119615]: }
Oct 08 09:50:44 compute-0 systemd[1]: libpod-2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0.scope: Deactivated successfully.
Oct 08 09:50:44 compute-0 podman[119599]: 2025-10-08 09:50:44.822255997 +0000 UTC m=+0.375649506 container died 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c-merged.mount: Deactivated successfully.
Oct 08 09:50:44 compute-0 podman[119599]: 2025-10-08 09:50:44.864284538 +0000 UTC m=+0.417678047 container remove 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:50:44 compute-0 systemd[1]: libpod-conmon-2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0.scope: Deactivated successfully.
Oct 08 09:50:44 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 2.
Oct 08 09:50:44 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:50:44 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.397s CPU time.
Oct 08 09:50:44 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:50:44 compute-0 sudo[119487]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:44 compute-0 sudo[119644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:50:44 compute-0 sudo[119644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:44 compute-0 sudo[119644]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:45 compute-0 sudo[119689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:50:45 compute-0 sudo[119689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:45 compute-0 podman[119729]: 2025-10-08 09:50:45.099942028 +0000 UTC m=+0.043272013 container create 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 08 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:45 compute-0 podman[119729]: 2025-10-08 09:50:45.156578576 +0000 UTC m=+0.099908581 container init 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 09:50:45 compute-0 podman[119729]: 2025-10-08 09:50:45.161144314 +0000 UTC m=+0.104474309 container start 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:50:45 compute-0 bash[119729]: 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94
Oct 08 09:50:45 compute-0 podman[119729]: 2025-10-08 09:50:45.082923342 +0000 UTC m=+0.026253357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:50:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 09:50:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 09:50:45 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:50:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 09:50:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 09:50:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 09:50:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 09:50:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 09:50:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:50:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:45.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:50:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:50:45 compute-0 sudo[119482]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:45 compute-0 podman[119851]: 2025-10-08 09:50:45.411762372 +0000 UTC m=+0.042378074 container create 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Oct 08 09:50:45 compute-0 systemd[1]: Started libpod-conmon-7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6.scope.
Oct 08 09:50:45 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:50:45 compute-0 podman[119851]: 2025-10-08 09:50:45.391559712 +0000 UTC m=+0.022175464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:50:45 compute-0 podman[119851]: 2025-10-08 09:50:45.502574754 +0000 UTC m=+0.133190546 container init 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 09:50:45 compute-0 podman[119851]: 2025-10-08 09:50:45.514724471 +0000 UTC m=+0.145340203 container start 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 09:50:45 compute-0 podman[119851]: 2025-10-08 09:50:45.519259399 +0000 UTC m=+0.149875141 container attach 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:50:45 compute-0 dazzling_bardeen[119867]: 167 167
Oct 08 09:50:45 compute-0 systemd[1]: libpod-7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6.scope: Deactivated successfully.
Oct 08 09:50:45 compute-0 conmon[119867]: conmon 7522981c35280f86244c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6.scope/container/memory.events
Oct 08 09:50:45 compute-0 podman[119851]: 2025-10-08 09:50:45.525450551 +0000 UTC m=+0.156066283 container died 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 09:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2adc613144e9b746934f61272d7f4b1e866dd681cc55db2b2881434e2506b4b9-merged.mount: Deactivated successfully.
Oct 08 09:50:45 compute-0 podman[119851]: 2025-10-08 09:50:45.58180346 +0000 UTC m=+0.212419162 container remove 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:50:45 compute-0 systemd[1]: libpod-conmon-7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6.scope: Deactivated successfully.
Oct 08 09:50:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:45] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 08 09:50:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:45] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 08 09:50:45 compute-0 podman[119893]: 2025-10-08 09:50:45.741568962 +0000 UTC m=+0.039731807 container create ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:50:45 compute-0 systemd[1]: Started libpod-conmon-ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d.scope.
Oct 08 09:50:45 compute-0 podman[119893]: 2025-10-08 09:50:45.723005407 +0000 UTC m=+0.021168252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:50:45 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:50:45 compute-0 podman[119893]: 2025-10-08 09:50:45.857477125 +0000 UTC m=+0.155639970 container init ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:50:45 compute-0 ceph-mon[73572]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 478 B/s wr, 1 op/s
Oct 08 09:50:45 compute-0 podman[119893]: 2025-10-08 09:50:45.865788366 +0000 UTC m=+0.163951231 container start ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:50:45 compute-0 podman[119893]: 2025-10-08 09:50:45.869683802 +0000 UTC m=+0.167846647 container attach ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 09:50:46 compute-0 sudo[120088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fianvdzakzynajjydfmmbzvvwgugftpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917046.0404463-398-155117664660411/AnsiballZ_setup.py'
Oct 08 09:50:46 compute-0 sudo[120088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 478 B/s wr, 1 op/s
Oct 08 09:50:46 compute-0 lvm[120113]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:50:46 compute-0 lvm[120113]: VG ceph_vg0 finished
Oct 08 09:50:46 compute-0 serene_poitras[119909]: {}
Oct 08 09:50:46 compute-0 systemd[1]: libpod-ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d.scope: Deactivated successfully.
Oct 08 09:50:46 compute-0 systemd[1]: libpod-ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d.scope: Consumed 1.090s CPU time.
Oct 08 09:50:46 compute-0 podman[119893]: 2025-10-08 09:50:46.551309362 +0000 UTC m=+0.849472207 container died ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:50:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292-merged.mount: Deactivated successfully.
Oct 08 09:50:46 compute-0 podman[119893]: 2025-10-08 09:50:46.594393468 +0000 UTC m=+0.892556313 container remove ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:50:46 compute-0 python3.9[120095]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:50:46 compute-0 systemd[1]: libpod-conmon-ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d.scope: Deactivated successfully.
Oct 08 09:50:46 compute-0 sudo[119689]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:46 compute-0 sudo[120088]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:50:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:50:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:46 compute-0 sudo[120155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:50:46 compute-0 sudo[120155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:50:46 compute-0 sudo[120155]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:46.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:50:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:46.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:50:47 compute-0 sudo[120306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbmeltqqzqkxtvmwqvrqkpdckycvfzkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917046.8863473-422-227114623202666/AnsiballZ_stat.py'
Oct 08 09:50:47 compute-0 sudo[120306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:47.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:47.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:47 compute-0 python3.9[120308]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:50:47 compute-0 sudo[120306]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:50:47
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control', 'volumes', 'images', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms']
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:50:47 compute-0 ceph-mon[73572]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 478 B/s wr, 1 op/s
Oct 08 09:50:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:50:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct 08 09:50:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:50:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:50:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:50:47 compute-0 sudo[120459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pafemritulpvnxqthzeqqkqpwzqkmqng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917047.633894-449-117547863515445/AnsiballZ_stat.py'
Oct 08 09:50:47 compute-0 sudo[120459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:50:48 compute-0 python3.9[120461]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:50:48 compute-0 sudo[120459]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 478 B/s wr, 1 op/s
Oct 08 09:50:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:50:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:50:48 compute-0 sudo[120611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vggekjipsnmhysvhchbekzxylsnqrdti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917048.5088317-479-231928010915043/AnsiballZ_service_facts.py'
Oct 08 09:50:48 compute-0 sudo[120611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:49 compute-0 python3.9[120613]: ansible-service_facts Invoked
Oct 08 09:50:49 compute-0 network[120631]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 09:50:49 compute-0 network[120632]: 'network-scripts' will be removed from distribution in near future.
Oct 08 09:50:49 compute-0 network[120633]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 09:50:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:50:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:49.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:50:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:49.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:49 compute-0 ceph-mon[73572]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 478 B/s wr, 1 op/s
Oct 08 09:50:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Oct 08 09:50:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:51.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 08 09:50:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 08 09:50:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:50:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:50:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 09:50:51 compute-0 ceph-mon[73572]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Oct 08 09:50:52 compute-0 sudo[120611]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Oct 08 09:50:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:50:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:50:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:50:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 09:50:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:50:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:50:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:50:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:53.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:53 compute-0 sudo[120923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydfpyelxqghlqqbfhsyjishfibpaldch ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759917053.0540226-518-137361522898107/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759917053.0540226-518-137361522898107/args'
Oct 08 09:50:53 compute-0 sudo[120923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:53 compute-0 sudo[120923]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:53 compute-0 ceph-mon[73572]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Oct 08 09:50:54 compute-0 sudo[121091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sruwdfrqjsyuwwpebffakmvqifpdlgoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917054.068227-551-209286772865453/AnsiballZ_dnf.py'
Oct 08 09:50:54 compute-0 sudo[121091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 08 09:50:54 compute-0 python3.9[121093]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:50:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095054 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:50:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:55.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:55.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 09:50:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 09:50:55 compute-0 sudo[121091]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:56 compute-0 ceph-mon[73572]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 08 09:50:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 08 09:50:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:56.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:50:57 compute-0 sudo[121247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akryvjbvtpenwblcevdpriedpbqgwgrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917056.402291-590-139992258398545/AnsiballZ_package_facts.py'
Oct 08 09:50:57 compute-0 sudo[121247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da0d5d0 =====
Oct 08 09:50:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da0d5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:57 compute-0 radosgw[88577]: beast: 0x7f162da0d5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:57 compute-0 python3.9[121249]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 08 09:50:57 compute-0 ceph-mon[73572]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 08 09:50:57 compute-0 sudo[121247]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 08 09:50:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000a:nfs.cephfs.2: -2
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 09:50:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:50:58 compute-0 sudo[121412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvxglstwtfztgcrcttjlujfsudbikcoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917058.3559263-620-64564805481055/AnsiballZ_stat.py'
Oct 08 09:50:58 compute-0 sudo[121412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:58 compute-0 python3.9[121414]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:50:58 compute-0 sudo[121412]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:59 compute-0 sudo[121494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqsrdiqnevzuzwjpjinrtsdbfdmhdrhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917058.3559263-620-64564805481055/AnsiballZ_file.py'
Oct 08 09:50:59 compute-0 sudo[121494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:50:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:50:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:50:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:50:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:50:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:50:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:50:59 compute-0 python3.9[121496]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:50:59 compute-0 sudo[121494]: pam_unix(sudo:session): session closed for user root
Oct 08 09:50:59 compute-0 ceph-mon[73572]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 08 09:50:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:00 compute-0 sudo[121647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayzilrxuorubhqxpqdbqmyhiwunfadec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917059.7510355-656-249881174581955/AnsiballZ_stat.py'
Oct 08 09:51:00 compute-0 sudo[121647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:00 compute-0 python3.9[121649]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:00 compute-0 sudo[121647]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 08 09:51:00 compute-0 sudo[121725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkrfxkcxgpyoiczyejevofzsvmqaoaeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917059.7510355-656-249881174581955/AnsiballZ_file.py'
Oct 08 09:51:00 compute-0 sudo[121725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:00 compute-0 python3.9[121727]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:00 compute-0 sudo[121725]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:00 compute-0 sudo[121752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:51:00 compute-0 sudo[121752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:00 compute-0 sudo[121752]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:01.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095101 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:51:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:01 compute-0 ceph-mon[73572]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 08 09:51:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:01 compute-0 sudo[121904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsnzfoxnsshihsootjlzpnlimzoudzlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917061.537767-710-143321950390717/AnsiballZ_lineinfile.py'
Oct 08 09:51:01 compute-0 sudo[121904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:02 compute-0 python3.9[121906]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:02 compute-0 sudo[121904]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Oct 08 09:51:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:51:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:51:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:51:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:03.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:51:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:51:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:03.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:51:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:03 compute-0 sudo[122057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnojqfqahtciixstzsyargneuwuksbzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917063.1106682-755-254247257377729/AnsiballZ_setup.py'
Oct 08 09:51:03 compute-0 sudo[122057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:03 compute-0 ceph-mon[73572]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Oct 08 09:51:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:51:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:03 compute-0 python3.9[122059]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:51:03 compute-0 sudo[122057]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Oct 08 09:51:04 compute-0 sudo[122142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csxpbvigsudkkunhdfzmstpsnskfkmkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917063.1106682-755-254247257377729/AnsiballZ_systemd.py'
Oct 08 09:51:04 compute-0 sudo[122142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:04 compute-0 python3.9[122144]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:51:04 compute-0 sudo[122142]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:05.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:05 compute-0 ceph-mon[73572]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Oct 08 09:51:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:05] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct 08 09:51:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:05] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct 08 09:51:05 compute-0 sshd-session[116121]: Connection closed by 192.168.122.30 port 58126
Oct 08 09:51:05 compute-0 sshd-session[116118]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:51:05 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct 08 09:51:05 compute-0 systemd[1]: session-43.scope: Consumed 23.156s CPU time.
Oct 08 09:51:05 compute-0 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Oct 08 09:51:05 compute-0 systemd-logind[798]: Removed session 43.
Oct 08 09:51:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:51:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095106 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:51:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:06.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:51:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:06.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:51:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:07.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:51:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:51:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:07 compute-0 ceph-mon[73572]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:51:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:51:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:51:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:09.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:51:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:09.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:09 compute-0 ceph-mon[73572]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:51:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Oct 08 09:51:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:11.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:11 compute-0 sshd-session[122178]: Accepted publickey for zuul from 192.168.122.30 port 37460 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:51:11 compute-0 systemd-logind[798]: New session 44 of user zuul.
Oct 08 09:51:11 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct 08 09:51:11 compute-0 sshd-session[122178]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:51:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:11 compute-0 sudo[122332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chhhocmykziqddkmemwytzhjsicezbvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917071.4634378-26-39767456038881/AnsiballZ_file.py'
Oct 08 09:51:11 compute-0 sudo[122332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:12 compute-0 ceph-mon[73572]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Oct 08 09:51:12 compute-0 python3.9[122334]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:12 compute-0 sudo[122332]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:51:12 compute-0 sudo[122484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcowfwbnsgvzfjlqnxgvgwkamoldcvxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917072.3865058-62-270490496763074/AnsiballZ_stat.py'
Oct 08 09:51:12 compute-0 sudo[122484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:13 compute-0 python3.9[122486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:13 compute-0 sudo[122484]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:13.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:13 compute-0 sudo[122563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecyvypwxxouwsyungruznopdiqlcgbzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917072.3865058-62-270490496763074/AnsiballZ_file.py'
Oct 08 09:51:13 compute-0 sudo[122563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:13 compute-0 python3.9[122565]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:13 compute-0 sudo[122563]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:13 compute-0 sshd-session[122181]: Connection closed by 192.168.122.30 port 37460
Oct 08 09:51:13 compute-0 sshd-session[122178]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:51:13 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct 08 09:51:13 compute-0 systemd[1]: session-44.scope: Consumed 1.620s CPU time.
Oct 08 09:51:13 compute-0 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Oct 08 09:51:13 compute-0 systemd-logind[798]: Removed session 44.
Oct 08 09:51:14 compute-0 ceph-mon[73572]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:51:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:51:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:15 compute-0 ceph-mon[73572]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:51:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:15.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:15.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:51:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:15] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct 08 09:51:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:15] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct 08 09:51:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:51:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:16.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:51:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:16.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:51:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 08 09:51:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:17.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 08 09:51:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:17 compute-0 ceph-mon[73572]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:51:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:51:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:51:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:51:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:51:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:51:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:51:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:51:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:51:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:51:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:51:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:18 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:51:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:18 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:51:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:19.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:19.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:19 compute-0 sshd-session[122596]: Accepted publickey for zuul from 192.168.122.30 port 45146 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:51:19 compute-0 systemd-logind[798]: New session 45 of user zuul.
Oct 08 09:51:19 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct 08 09:51:19 compute-0 sshd-session[122596]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:51:19 compute-0 ceph-mon[73572]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:51:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Oct 08 09:51:20 compute-0 python3.9[122750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:51:20 compute-0 sudo[122779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:51:20 compute-0 sudo[122779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:20 compute-0 sudo[122779]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:21.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:21.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:21 compute-0 sudo[122930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcklmjqldinbelmdtextvxepaneuaxpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917080.9234095-59-184446790959611/AnsiballZ_file.py'
Oct 08 09:51:21 compute-0 sudo[122930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:21 compute-0 python3.9[122932]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:21 compute-0 sudo[122930]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:21 compute-0 ceph-mon[73572]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Oct 08 09:51:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:51:22 compute-0 sudo[123106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxebdidjrrsnytvvdsoyobwkukhsesrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917081.7617805-83-57704441638568/AnsiballZ_stat.py'
Oct 08 09:51:22 compute-0 sudo[123106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 08 09:51:22 compute-0 python3.9[123108]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:22 compute-0 sudo[123106]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:22 compute-0 sudo[123184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfkiuqpeorwzezhipzqhllyqvicucmtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917081.7617805-83-57704441638568/AnsiballZ_file.py'
Oct 08 09:51:22 compute-0 sudo[123184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:22 compute-0 python3.9[123186]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.zhfsalr3 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:22 compute-0 sudo[123184]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:23.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:23 compute-0 ceph-mon[73572]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 08 09:51:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:23 compute-0 sudo[123337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbblrylswozzvidadxftqbpqywghkmtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917083.4473724-143-179311626866360/AnsiballZ_stat.py'
Oct 08 09:51:23 compute-0 sudo[123337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:23 compute-0 python3.9[123339]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:23 compute-0 sudo[123337]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:24 compute-0 sudo[123416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khlqhuqgywnukutdkwdsedibiuumthla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917083.4473724-143-179311626866360/AnsiballZ_file.py'
Oct 08 09:51:24 compute-0 sudo[123416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:51:24 compute-0 python3.9[123418]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.8p15tyji recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:24 compute-0 sudo[123416]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:25 compute-0 sudo[123569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlqtywjteunzzhsafklnfxmitdsmrnhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917084.7498536-182-117849102940242/AnsiballZ_file.py'
Oct 08 09:51:25 compute-0 sudo[123569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:25 compute-0 python3.9[123571]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:51:25 compute-0 sudo[123569]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:25.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:25 compute-0 ceph-mon[73572]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:51:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:25] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct 08 09:51:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:25] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct 08 09:51:25 compute-0 sudo[123721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhuwkawkqumnojhqtrjkqlklrvakczru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917085.5601673-206-90197665607299/AnsiballZ_stat.py'
Oct 08 09:51:25 compute-0 sudo[123721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:26 compute-0 python3.9[123723]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:26 compute-0 sudo[123721]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:26 compute-0 sudo[123800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgozodpsoovmsuysmbjrivbdhajonefa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917085.5601673-206-90197665607299/AnsiballZ_file.py'
Oct 08 09:51:26 compute-0 sudo[123800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 937 B/s wr, 2 op/s
Oct 08 09:51:26 compute-0 python3.9[123802]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:51:26 compute-0 sudo[123800]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095126 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:51:26 compute-0 sudo[123952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynfblsjnwzbnakxqkdaonxikjraynqma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917086.6094139-206-160505951638168/AnsiballZ_stat.py'
Oct 08 09:51:26 compute-0 sudo[123952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:26.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:51:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:26.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:51:27 compute-0 python3.9[123954]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:27 compute-0 sudo[123952]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:27.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:27 compute-0 sudo[124031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemqozemgrjbitlpwzowhsyxgmxvnpsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917086.6094139-206-160505951638168/AnsiballZ_file.py'
Oct 08 09:51:27 compute-0 sudo[124031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:27 compute-0 python3.9[124033]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:51:27 compute-0 sudo[124031]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:27 compute-0 ceph-mon[73572]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 937 B/s wr, 2 op/s
Oct 08 09:51:27 compute-0 sudo[124184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tooawvxcmpshksfysfvplsxtnnhqoqhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917087.7564003-275-189107278595894/AnsiballZ_file.py'
Oct 08 09:51:27 compute-0 sudo[124184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:28 compute-0 python3.9[124186]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:28 compute-0 sudo[124184]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:51:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:28 compute-0 sudo[124336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjidcjjtqczpijghjuyvdcxagjazdxkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917088.4984927-299-212861610117532/AnsiballZ_stat.py'
Oct 08 09:51:28 compute-0 sudo[124336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:28 compute-0 systemd[92032]: Created slice User Background Tasks Slice.
Oct 08 09:51:28 compute-0 systemd[92032]: Starting Cleanup of User's Temporary Files and Directories...
Oct 08 09:51:28 compute-0 systemd[92032]: Finished Cleanup of User's Temporary Files and Directories.
Oct 08 09:51:28 compute-0 python3.9[124338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:28 compute-0 sudo[124336]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:29 compute-0 sudo[124416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgpihmgsbbucwpyvvuhxwtgabeybrybs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917088.4984927-299-212861610117532/AnsiballZ_file.py'
Oct 08 09:51:29 compute-0 sudo[124416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:29.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:29.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:29 compute-0 python3.9[124418]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:29 compute-0 sudo[124416]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:29 compute-0 ceph-mon[73572]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:51:29 compute-0 sudo[124569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqiownafmcprxqeevcijcxfqystmcban ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917089.6467745-335-89679269828802/AnsiballZ_stat.py'
Oct 08 09:51:29 compute-0 sudo[124569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:30 compute-0 python3.9[124571]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:30 compute-0 sudo[124569]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:30 compute-0 sudo[124647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqxbzktrknhpvydnhpbsagccqflnqmtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917089.6467745-335-89679269828802/AnsiballZ_file.py'
Oct 08 09:51:30 compute-0 sudo[124647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:51:30 compute-0 python3.9[124649]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:30 compute-0 sudo[124647]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:31.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:31.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:31 compute-0 sudo[124801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nagoxbmecrhkuikuxvlilhedyttamkdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917090.824678-371-119748320963215/AnsiballZ_systemd.py'
Oct 08 09:51:31 compute-0 sudo[124801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:31 compute-0 python3.9[124803]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:51:31 compute-0 systemd[1]: Reloading.
Oct 08 09:51:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:31 compute-0 systemd-rc-local-generator[124829]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:51:31 compute-0 systemd-sysv-generator[124832]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:51:31 compute-0 ceph-mon[73572]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:51:32 compute-0 sudo[124801]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:51:32 compute-0 sudo[124990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpbtvotmyrnwzdycivxdlangwcuomjrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917092.2484694-395-102707304560440/AnsiballZ_stat.py'
Oct 08 09:51:32 compute-0 sudo[124990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:32 compute-0 python3.9[124992]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:32 compute-0 sudo[124990]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:51:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:51:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:51:32 compute-0 sudo[125068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftjwwzovyfguwblzjoaifrsgzippeqad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917092.2484694-395-102707304560440/AnsiballZ_file.py'
Oct 08 09:51:32 compute-0 sudo[125068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:33 compute-0 python3.9[125070]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:33 compute-0 sudo[125068]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:33.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:33.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:33 compute-0 sudo[125221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sspyvtgrcmzwamscemhavvswdztekbut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917093.3025813-431-199844909270590/AnsiballZ_stat.py'
Oct 08 09:51:33 compute-0 sudo[125221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:33 compute-0 python3.9[125223]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:33 compute-0 sudo[125221]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:33 compute-0 ceph-mon[73572]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:51:33 compute-0 sudo[125300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgikujaziektxvdyudladhafveyhmquj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917093.3025813-431-199844909270590/AnsiballZ_file.py'
Oct 08 09:51:33 compute-0 sudo[125300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:34 compute-0 python3.9[125302]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:34 compute-0 sudo[125300]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:51:34 compute-0 sudo[125452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qezgdtmkhnrqbilecjecqhulewajfreq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917094.454662-467-32949706962275/AnsiballZ_systemd.py'
Oct 08 09:51:34 compute-0 sudo[125452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:35 compute-0 python3.9[125454]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:51:35 compute-0 systemd[1]: Reloading.
Oct 08 09:51:35 compute-0 systemd-rc-local-generator[125482]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:51:35 compute-0 systemd-sysv-generator[125485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:51:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:35.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:35.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:35 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 09:51:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:35 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 09:51:35 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 09:51:35 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 09:51:35 compute-0 sudo[125452]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:35] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct 08 09:51:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:35] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct 08 09:51:35 compute-0 ceph-mon[73572]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:51:36 compute-0 python3.9[125647]: ansible-ansible.builtin.service_facts Invoked
Oct 08 09:51:36 compute-0 network[125664]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 09:51:36 compute-0 network[125665]: 'network-scripts' will be removed from distribution in near future.
Oct 08 09:51:36 compute-0 network[125666]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 09:51:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:51:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:36.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:51:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:37 compute-0 ceph-mon[73572]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:51:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:51:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:38 compute-0 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:45916] [POST] [200] [0.002s] [4.0B] [47b87778-d9c2-45ac-9535-7e3cd10eb0ea] /api/prometheus_receiver
Oct 08 09:51:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:39.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:39.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:39 compute-0 ceph-mon[73572]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:51:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:51:40 compute-0 sudo[125808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:51:40 compute-0 sudo[125808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:40 compute-0 sudo[125808]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:41.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 08 09:51:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 08 09:51:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:41 compute-0 sudo[125960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrgxnovvpyrrbrborrjmuzygcmbprboq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917101.5904567-545-278946640890698/AnsiballZ_stat.py'
Oct 08 09:51:41 compute-0 sudo[125960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:42 compute-0 ceph-mon[73572]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:51:42 compute-0 python3.9[125962]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:42 compute-0 sudo[125960]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:42 compute-0 sudo[126038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqsqvcnjizmxefrwyurbixhovqffoogg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917101.5904567-545-278946640890698/AnsiballZ_file.py'
Oct 08 09:51:42 compute-0 sudo[126038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:51:42 compute-0 python3.9[126040]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:42 compute-0 sudo[126038]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:43 compute-0 sudo[126191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcadhdojxnobahsxycsnmdnvxgwucjmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917102.8843424-584-198775582518250/AnsiballZ_file.py'
Oct 08 09:51:43 compute-0 sudo[126191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:43.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:43.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:43 compute-0 python3.9[126193]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:43 compute-0 sudo[126191]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:43 compute-0 sudo[126343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evjorilyqurhriuyhyrwbljpjsonmrul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917103.5785103-608-134031255344107/AnsiballZ_stat.py'
Oct 08 09:51:43 compute-0 sudo[126343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:44 compute-0 ceph-mon[73572]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:51:44 compute-0 python3.9[126345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:44 compute-0 sudo[126343]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:44 compute-0 sudo[126422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbtwtilkqdleccyrwwfelhlbadejwgol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917103.5785103-608-134031255344107/AnsiballZ_file.py'
Oct 08 09:51:44 compute-0 sudo[126422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:51:44 compute-0 python3.9[126424]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:44 compute-0 sudo[126422]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:45.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:45 compute-0 sudo[126576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzjsjnljhlaoloeiqqdqtseixkkfflcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917105.042906-653-259416187370634/AnsiballZ_timezone.py'
Oct 08 09:51:45 compute-0 sudo[126576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:45] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct 08 09:51:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:45] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct 08 09:51:45 compute-0 python3.9[126578]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 08 09:51:45 compute-0 systemd[1]: Starting Time & Date Service...
Oct 08 09:51:45 compute-0 systemd[1]: Started Time & Date Service.
Oct 08 09:51:45 compute-0 sudo[126576]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:46 compute-0 ceph-mon[73572]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:51:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:51:46 compute-0 sudo[126733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eakbppydghybovddpspqcqhlwjdydkzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917106.1991358-680-281135372416036/AnsiballZ_file.py'
Oct 08 09:51:46 compute-0 sudo[126733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:46 compute-0 python3.9[126735]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:46 compute-0 sudo[126733]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:46.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:51:47 compute-0 sudo[126825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:51:47 compute-0 sudo[126825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:47 compute-0 sudo[126825]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:47 compute-0 sudo[126869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:51:47 compute-0 sudo[126869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:47 compute-0 sudo[126936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqkizsqluubrnwrarlluuemnqnannlmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917106.867261-704-140422389190081/AnsiballZ_stat.py'
Oct 08 09:51:47 compute-0 sudo[126936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:47.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:47.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:47 compute-0 python3.9[126938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:47 compute-0 sudo[126936]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:47 compute-0 sudo[126869]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:51:47
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', 'backups', '.nfs']
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:51:47 compute-0 sudo[127045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gitabumlltdtqpsidbidubmuxqpwtqgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917106.867261-704-140422389190081/AnsiballZ_file.py'
Oct 08 09:51:47 compute-0 sudo[127045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:51:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct 08 09:51:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:51:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:51:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:51:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:51:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:47 compute-0 sudo[127048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:51:47 compute-0 sudo[127048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:47 compute-0 sudo[127048]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:51:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:51:47 compute-0 python3.9[127047]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:47 compute-0 sudo[127073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:51:47 compute-0 sudo[127073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:47 compute-0 sudo[127045]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:51:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:51:48 compute-0 ceph-mon[73572]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:51:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:51:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:51:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:51:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:51:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:51:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:51:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:51:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:51:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:51:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:51:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:51:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:51:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:51:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:51:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:51:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:51:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:51:48 compute-0 podman[127261]: 2025-10-08 09:51:48.247757545 +0000 UTC m=+0.041688201 container create d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:51:48 compute-0 sudo[127301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmvtffvuawnsjvaxbestebpnxjoeweuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917108.0096176-740-231467531304715/AnsiballZ_stat.py'
Oct 08 09:51:48 compute-0 sudo[127301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:48 compute-0 systemd[1]: Started libpod-conmon-d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0.scope.
Oct 08 09:51:48 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:51:48 compute-0 podman[127261]: 2025-10-08 09:51:48.232445123 +0000 UTC m=+0.026375799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:51:48 compute-0 podman[127261]: 2025-10-08 09:51:48.341004166 +0000 UTC m=+0.134934842 container init d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:51:48 compute-0 podman[127261]: 2025-10-08 09:51:48.346709465 +0000 UTC m=+0.140640121 container start d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:51:48 compute-0 podman[127261]: 2025-10-08 09:51:48.349885755 +0000 UTC m=+0.143816411 container attach d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:51:48 compute-0 eloquent_rosalind[127306]: 167 167
Oct 08 09:51:48 compute-0 systemd[1]: libpod-d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0.scope: Deactivated successfully.
Oct 08 09:51:48 compute-0 podman[127261]: 2025-10-08 09:51:48.35326155 +0000 UTC m=+0.147192206 container died d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:51:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-03755b4cb9422064ecdd9bd2028955c9906f88f2f9290616f773c41624281c16-merged.mount: Deactivated successfully.
Oct 08 09:51:48 compute-0 podman[127261]: 2025-10-08 09:51:48.39078887 +0000 UTC m=+0.184719526 container remove d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:51:48 compute-0 systemd[1]: libpod-conmon-d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0.scope: Deactivated successfully.
Oct 08 09:51:48 compute-0 python3.9[127303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:48 compute-0 sudo[127301]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:48 compute-0 podman[127333]: 2025-10-08 09:51:48.569630631 +0000 UTC m=+0.044673725 container create 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:51:48 compute-0 systemd[1]: Started libpod-conmon-7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e.scope.
Oct 08 09:51:48 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:48 compute-0 podman[127333]: 2025-10-08 09:51:48.630365 +0000 UTC m=+0.105408124 container init 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 09:51:48 compute-0 podman[127333]: 2025-10-08 09:51:48.640958473 +0000 UTC m=+0.116001567 container start 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:51:48 compute-0 podman[127333]: 2025-10-08 09:51:48.643782002 +0000 UTC m=+0.118825096 container attach 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:51:48 compute-0 podman[127333]: 2025-10-08 09:51:48.553968989 +0000 UTC m=+0.029012113 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:51:48 compute-0 sudo[127426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecnqixobtzqfcbfcnqvxlcylkjjedytp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917108.0096176-740-231467531304715/AnsiballZ_file.py'
Oct 08 09:51:48 compute-0 sudo[127426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:48.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:51:48 compute-0 python3.9[127428]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.naax4c3o recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:48 compute-0 sudo[127426]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:48 compute-0 heuristic_sammet[127384]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:51:48 compute-0 heuristic_sammet[127384]: --> All data devices are unavailable
Oct 08 09:51:48 compute-0 systemd[1]: libpod-7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e.scope: Deactivated successfully.
Oct 08 09:51:48 compute-0 podman[127333]: 2025-10-08 09:51:48.986500933 +0000 UTC m=+0.461544027 container died 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:51:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0-merged.mount: Deactivated successfully.
Oct 08 09:51:49 compute-0 podman[127333]: 2025-10-08 09:51:49.034823812 +0000 UTC m=+0.509866906 container remove 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:51:49 compute-0 systemd[1]: libpod-conmon-7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e.scope: Deactivated successfully.
Oct 08 09:51:49 compute-0 sudo[127073]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:49 compute-0 sudo[127488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:51:49 compute-0 sudo[127488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:49 compute-0 sudo[127488]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:49 compute-0 ceph-mon[73572]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct 08 09:51:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:49 compute-0 sudo[127546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:51:49 compute-0 sudo[127546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:49.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:49.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:49 compute-0 sudo[127650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsvegooifskvzoncnrubhqoikvpaogtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917109.0959105-776-127430775793462/AnsiballZ_stat.py'
Oct 08 09:51:49 compute-0 sudo[127650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:49 compute-0 python3.9[127652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:49 compute-0 podman[127692]: 2025-10-08 09:51:49.564163689 +0000 UTC m=+0.035750465 container create 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:51:49 compute-0 sudo[127650]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:49 compute-0 systemd[1]: Started libpod-conmon-182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd.scope.
Oct 08 09:51:49 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:51:49 compute-0 podman[127692]: 2025-10-08 09:51:49.546951308 +0000 UTC m=+0.018538094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:51:49 compute-0 podman[127692]: 2025-10-08 09:51:49.644853164 +0000 UTC m=+0.116439940 container init 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:51:49 compute-0 podman[127692]: 2025-10-08 09:51:49.653168806 +0000 UTC m=+0.124755582 container start 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:51:49 compute-0 podman[127692]: 2025-10-08 09:51:49.657129431 +0000 UTC m=+0.128716227 container attach 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 08 09:51:49 compute-0 tender_herschel[127710]: 167 167
Oct 08 09:51:49 compute-0 systemd[1]: libpod-182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd.scope: Deactivated successfully.
Oct 08 09:51:49 compute-0 conmon[127710]: conmon 182a53544e618cc294c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd.scope/container/memory.events
Oct 08 09:51:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 363 B/s rd, 0 op/s
Oct 08 09:51:49 compute-0 podman[127738]: 2025-10-08 09:51:49.698862902 +0000 UTC m=+0.026104482 container died 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:51:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-50322bdba410914bfe547c787c2ba64bfecdb58e78265ba3fb06aef26f6af948-merged.mount: Deactivated successfully.
Oct 08 09:51:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:49 compute-0 podman[127738]: 2025-10-08 09:51:49.731136326 +0000 UTC m=+0.058377906 container remove 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:51:49 compute-0 systemd[1]: libpod-conmon-182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd.scope: Deactivated successfully.
Oct 08 09:51:49 compute-0 sudo[127805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khqnxjqjkqbwngcghptmzalhtbpbowvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917109.0959105-776-127430775793462/AnsiballZ_file.py'
Oct 08 09:51:49 compute-0 sudo[127805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:49 compute-0 podman[127813]: 2025-10-08 09:51:49.89478315 +0000 UTC m=+0.051036105 container create 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:51:49 compute-0 systemd[1]: Started libpod-conmon-145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070.scope.
Oct 08 09:51:49 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:49 compute-0 podman[127813]: 2025-10-08 09:51:49.87822451 +0000 UTC m=+0.034477495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:51:49 compute-0 podman[127813]: 2025-10-08 09:51:49.981779404 +0000 UTC m=+0.138032379 container init 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 09:51:49 compute-0 podman[127813]: 2025-10-08 09:51:49.989092144 +0000 UTC m=+0.145345099 container start 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:51:49 compute-0 podman[127813]: 2025-10-08 09:51:49.993157752 +0000 UTC m=+0.149410697 container attach 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:51:50 compute-0 python3.9[127808]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:50 compute-0 sudo[127805]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:50 compute-0 admiring_hellman[127831]: {
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:     "1": [
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:         {
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "devices": [
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "/dev/loop3"
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             ],
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "lv_name": "ceph_lv0",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "lv_size": "21470642176",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "name": "ceph_lv0",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "tags": {
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.cluster_name": "ceph",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.crush_device_class": "",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.encrypted": "0",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.osd_id": "1",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.type": "block",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.vdo": "0",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:                 "ceph.with_tpm": "0"
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             },
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "type": "block",
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:             "vg_name": "ceph_vg0"
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:         }
Oct 08 09:51:50 compute-0 admiring_hellman[127831]:     ]
Oct 08 09:51:50 compute-0 admiring_hellman[127831]: }
Oct 08 09:51:50 compute-0 systemd[1]: libpod-145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070.scope: Deactivated successfully.
Oct 08 09:51:50 compute-0 podman[127813]: 2025-10-08 09:51:50.307437819 +0000 UTC m=+0.463690774 container died 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f-merged.mount: Deactivated successfully.
Oct 08 09:51:50 compute-0 podman[127813]: 2025-10-08 09:51:50.349454339 +0000 UTC m=+0.505707314 container remove 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 09:51:50 compute-0 systemd[1]: libpod-conmon-145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070.scope: Deactivated successfully.
Oct 08 09:51:50 compute-0 sudo[127546]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:50 compute-0 sudo[127927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:51:50 compute-0 sudo[127927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:50 compute-0 sudo[127927]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:50 compute-0 sudo[127952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:51:50 compute-0 sudo[127952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:50 compute-0 sudo[128062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aapdybiwavvwbllskskiywirejxzfmys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917110.3172228-815-129882627300168/AnsiballZ_command.py'
Oct 08 09:51:50 compute-0 sudo[128062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:50 compute-0 podman[128091]: 2025-10-08 09:51:50.901705416 +0000 UTC m=+0.055939929 container create 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:51:50 compute-0 systemd[1]: Started libpod-conmon-384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f.scope.
Oct 08 09:51:50 compute-0 python3.9[128076]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:51:50 compute-0 podman[128091]: 2025-10-08 09:51:50.868856964 +0000 UTC m=+0.023091507 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:51:50 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:51:50 compute-0 sudo[128062]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:51 compute-0 podman[128091]: 2025-10-08 09:51:51.058311628 +0000 UTC m=+0.212546171 container init 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 09:51:51 compute-0 podman[128091]: 2025-10-08 09:51:51.064581756 +0000 UTC m=+0.218816269 container start 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:51:51 compute-0 reverent_ramanujan[128108]: 167 167
Oct 08 09:51:51 compute-0 systemd[1]: libpod-384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f.scope: Deactivated successfully.
Oct 08 09:51:51 compute-0 podman[128091]: 2025-10-08 09:51:51.084295765 +0000 UTC m=+0.238530278 container attach 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 09:51:51 compute-0 podman[128091]: 2025-10-08 09:51:51.084610235 +0000 UTC m=+0.238844748 container died 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 08 09:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4878b8c21afde0760d06941b7e1a5217c806e776c439ba57b20fba33abb9cdf-merged.mount: Deactivated successfully.
Oct 08 09:51:51 compute-0 ceph-mon[73572]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 363 B/s rd, 0 op/s
Oct 08 09:51:51 compute-0 podman[128091]: 2025-10-08 09:51:51.158430985 +0000 UTC m=+0.312665498 container remove 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:51:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:51 compute-0 systemd[1]: libpod-conmon-384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f.scope: Deactivated successfully.
Oct 08 09:51:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:51.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:51.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:51 compute-0 podman[128212]: 2025-10-08 09:51:51.359796603 +0000 UTC m=+0.054290006 container create bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 09:51:51 compute-0 systemd[1]: Started libpod-conmon-bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17.scope.
Oct 08 09:51:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:51 compute-0 podman[128212]: 2025-10-08 09:51:51.335096807 +0000 UTC m=+0.029590280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:51:51 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:51:51 compute-0 podman[128212]: 2025-10-08 09:51:51.456870885 +0000 UTC m=+0.151364278 container init bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 09:51:51 compute-0 podman[128212]: 2025-10-08 09:51:51.464028189 +0000 UTC m=+0.158521562 container start bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:51:51 compute-0 podman[128212]: 2025-10-08 09:51:51.469691957 +0000 UTC m=+0.164185350 container attach bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 09:51:51 compute-0 sudo[128307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flhuduzyayahwtaydpscouwdhnsqdmkp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917111.1953347-839-103902369498215/AnsiballZ_edpm_nftables_from_files.py'
Oct 08 09:51:51 compute-0 sudo[128307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct 08 09:51:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:51 compute-0 python3[128310]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 08 09:51:51 compute-0 sudo[128307]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:52 compute-0 lvm[128427]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:51:52 compute-0 lvm[128427]: VG ceph_vg0 finished
Oct 08 09:51:52 compute-0 gallant_yalow[128229]: {}
Oct 08 09:51:52 compute-0 systemd[1]: libpod-bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17.scope: Deactivated successfully.
Oct 08 09:51:52 compute-0 podman[128212]: 2025-10-08 09:51:52.141221873 +0000 UTC m=+0.835715246 container died bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:51:52 compute-0 systemd[1]: libpod-bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17.scope: Consumed 1.082s CPU time.
Oct 08 09:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb-merged.mount: Deactivated successfully.
Oct 08 09:51:52 compute-0 podman[128212]: 2025-10-08 09:51:52.18214558 +0000 UTC m=+0.876638953 container remove bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 09:51:52 compute-0 systemd[1]: libpod-conmon-bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17.scope: Deactivated successfully.
Oct 08 09:51:52 compute-0 sudo[127952]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:51:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:51:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:51:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:51:52 compute-0 sudo[128514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:51:52 compute-0 sudo[128514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:51:52 compute-0 sudo[128514]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:52 compute-0 sudo[128568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-corawgjnhgjnuyuytcqwfazlnvxykdsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917112.038324-863-54009256246953/AnsiballZ_stat.py'
Oct 08 09:51:52 compute-0 sudo[128568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:52 compute-0 python3.9[128570]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:52 compute-0 sudo[128568]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:52 compute-0 sudo[128646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwrdotmksknknoeeoxgixfggxoutclki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917112.038324-863-54009256246953/AnsiballZ_file.py'
Oct 08 09:51:52 compute-0 sudo[128646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:52 compute-0 python3.9[128648]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:52 compute-0 sudo[128646]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:53 compute-0 ceph-mon[73572]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct 08 09:51:53 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:51:53 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:51:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:53.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:53.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:53 compute-0 sudo[128799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-outfobhtwftibbicuxjvzbscdjaftdpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917113.1990461-899-192230680444395/AnsiballZ_stat.py'
Oct 08 09:51:53 compute-0 sudo[128799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:53 compute-0 python3.9[128801]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 454 B/s rd, 0 op/s
Oct 08 09:51:53 compute-0 sudo[128799]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:53 compute-0 sudo[128878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xshntkfuebnjjjqzsqgguutzdamhpwuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917113.1990461-899-192230680444395/AnsiballZ_file.py'
Oct 08 09:51:53 compute-0 sudo[128878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:54 compute-0 python3.9[128880]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:54 compute-0 ceph-mon[73572]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 454 B/s rd, 0 op/s
Oct 08 09:51:54 compute-0 sudo[128878]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:54 compute-0 sudo[129030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtrldrucslnfgpmmgxdriihcavyekwer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917114.3624468-935-46910046852198/AnsiballZ_stat.py'
Oct 08 09:51:54 compute-0 sudo[129030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:54 compute-0 python3.9[129032]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:54 compute-0 sudo[129030]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:55 compute-0 sudo[129109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whscnmqanqlseyingpbilggaasrylzwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917114.3624468-935-46910046852198/AnsiballZ_file.py'
Oct 08 09:51:55 compute-0 sudo[129109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:55 compute-0 python3.9[129111]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:55 compute-0 sudo[129109]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:55.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:55.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct 08 09:51:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:55] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:51:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:55] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:51:55 compute-0 sudo[129261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocjledhoztiekoowkchftqvwxtkgyifj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917115.5239933-971-273959927590460/AnsiballZ_stat.py'
Oct 08 09:51:55 compute-0 sudo[129261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:55 compute-0 python3.9[129263]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:56 compute-0 sudo[129261]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:56 compute-0 sudo[129340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owwxbxngtfnolplyylmsplennrpshasb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917115.5239933-971-273959927590460/AnsiballZ_file.py'
Oct 08 09:51:56 compute-0 sudo[129340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:56 compute-0 python3.9[129342]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:56 compute-0 sudo[129340]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:56 compute-0 ceph-mon[73572]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct 08 09:51:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:56.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:51:57 compute-0 sudo[129492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqigpxxqoqpwtyiatitkpqyxvtnndlru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917116.6579792-1007-66000444199382/AnsiballZ_stat.py'
Oct 08 09:51:57 compute-0 sudo[129492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:57 compute-0 python3.9[129494]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:51:57 compute-0 sudo[129492]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:57.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:51:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:57.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:51:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:57 compute-0 sudo[129571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clrswxajzfsvsakxkowuwuezrhwczgnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917116.6579792-1007-66000444199382/AnsiballZ_file.py'
Oct 08 09:51:57 compute-0 sudo[129571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct 08 09:51:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:57 compute-0 python3.9[129573]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:57 compute-0 sudo[129571]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:58 compute-0 sudo[129726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebfdtdkzabsnyddiqqbeagsdgtxfmbny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917117.9980114-1046-147290706858438/AnsiballZ_command.py'
Oct 08 09:51:58 compute-0 sudo[129726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:51:58 compute-0 python3.9[129728]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:51:58 compute-0 sudo[129726]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:58 compute-0 ceph-mon[73572]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct 08 09:51:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:58.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:51:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:59 compute-0 sudo[129882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqrhmsyoyphbkuebzgzyiarbpvgdkovg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917118.738269-1070-137974268265680/AnsiballZ_blockinfile.py'
Oct 08 09:51:59 compute-0 sudo[129882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:51:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:59.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:51:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:51:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:59.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:51:59 compute-0 python3.9[129884]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:51:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:59 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 09:51:59 compute-0 sudo[129882]: pam_unix(sudo:session): session closed for user root
Oct 08 09:51:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:51:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:51:59 compute-0 sudo[130036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcpuwjcvxnzvazklwxddkeqqtbtttfxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917119.6605668-1097-143363887684158/AnsiballZ_file.py'
Oct 08 09:51:59 compute-0 sudo[130036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:00 compute-0 python3.9[130038]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:00 compute-0 sudo[130036]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:00 compute-0 sudo[130188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxzmlohkinrdfsakfwuxgvhobwpwzjhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917120.5146792-1097-110482412317856/AnsiballZ_file.py'
Oct 08 09:52:00 compute-0 sudo[130188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:00 compute-0 ceph-mon[73572]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:01 compute-0 python3.9[130190]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:01 compute-0 sudo[130188]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:01 compute-0 sudo[130191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:52:01 compute-0 sudo[130191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:01 compute-0 sudo[130191]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:01.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:01.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:01 compute-0 sudo[130366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywmmzsyyeitrjmocqpnqfmtvfsfrdrib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917121.3236797-1142-63448052950090/AnsiballZ_mount.py'
Oct 08 09:52:01 compute-0 sudo[130366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:01 compute-0 python3.9[130368]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 08 09:52:01 compute-0 sudo[130366]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:02 compute-0 sudo[130519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrsymqetdaxjasgwpmoqirhxicsnscyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917122.1262918-1142-8040155633062/AnsiballZ_mount.py'
Oct 08 09:52:02 compute-0 sudo[130519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:02 compute-0 python3.9[130521]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 08 09:52:02 compute-0 sudo[130519]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:52:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:52:02 compute-0 ceph-mon[73572]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:52:03 compute-0 sshd-session[122599]: Connection closed by 192.168.122.30 port 45146
Oct 08 09:52:03 compute-0 sshd-session[122596]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:52:03 compute-0 systemd-logind[798]: Session 45 logged out. Waiting for processes to exit.
Oct 08 09:52:03 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Oct 08 09:52:03 compute-0 systemd[1]: session-45.scope: Consumed 29.215s CPU time.
Oct 08 09:52:03 compute-0 systemd-logind[798]: Removed session 45.
Oct 08 09:52:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:03.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:04 compute-0 ceph-mon[73572]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:05.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:05] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:52:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:05] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:52:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:06 compute-0 ceph-mon[73572]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:06.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:52:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:06.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:52:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:07.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:08.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:52:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:08.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:52:08 compute-0 ceph-mon[73572]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:09.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:09.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:09 compute-0 sshd-session[130553]: Accepted publickey for zuul from 192.168.122.30 port 47584 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:52:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:09 compute-0 systemd-logind[798]: New session 46 of user zuul.
Oct 08 09:52:09 compute-0 systemd[1]: Started Session 46 of User zuul.
Oct 08 09:52:09 compute-0 sshd-session[130553]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:52:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:10 compute-0 sudo[130707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxrrkodenwmtlkmwgbjwwlofqdlokklz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917129.529978-18-224441346045681/AnsiballZ_tempfile.py'
Oct 08 09:52:10 compute-0 sudo[130707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:10 compute-0 python3.9[130709]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 08 09:52:10 compute-0 sudo[130707]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:10 compute-0 sudo[130859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjpqpeovnacujxgqgnantwaxzfhglauv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917130.4299762-54-5387466760245/AnsiballZ_stat.py'
Oct 08 09:52:10 compute-0 sudo[130859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:10 compute-0 ceph-mon[73572]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:11 compute-0 python3.9[130861]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:52:11 compute-0 sudo[130859]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:11.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:11.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:11 compute-0 sudo[131014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsxkyczeumwiysvrdfkwqymrlhtqfmhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917131.292085-78-196997707301062/AnsiballZ_slurp.py'
Oct 08 09:52:11 compute-0 sudo[131014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:11 compute-0 python3.9[131016]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 08 09:52:11 compute-0 sudo[131014]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:12 compute-0 sudo[131167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcbkpguxrlbgzgolvinplurrzzsmvuoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917132.1152978-102-20639645602985/AnsiballZ_stat.py'
Oct 08 09:52:12 compute-0 sudo[131167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:12 compute-0 python3.9[131169]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.41qg5f3a follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:52:12 compute-0 sudo[131167]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:13 compute-0 ceph-mon[73572]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:13 compute-0 sudo[131293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmmubuokjahcswdhrukmbyqtimhcemmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917132.1152978-102-20639645602985/AnsiballZ_copy.py'
Oct 08 09:52:13 compute-0 sudo[131293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:13.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:52:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:13.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:52:13 compute-0 python3.9[131295]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.41qg5f3a mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917132.1152978-102-20639645602985/.source.41qg5f3a _original_basename=.q76e5zuf follow=False checksum=645509817f1020adcb4b475a04ffc8472d1fc5c9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:13 compute-0 sudo[131293]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:14 compute-0 sudo[131446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpbbhpogemwxguhwratnenlbrnrvetio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917133.6139133-147-74140088643177/AnsiballZ_setup.py'
Oct 08 09:52:14 compute-0 sudo[131446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:14 compute-0 python3.9[131448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:52:14 compute-0 sudo[131446]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:15 compute-0 ceph-mon[73572]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:15 compute-0 sudo[131599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxstfmlcyqrvcypfzfzcofoyjoykkotb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917134.861041-172-167017146139370/AnsiballZ_blockinfile.py'
Oct 08 09:52:15 compute-0 sudo[131599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:52:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:15.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:52:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:52:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:15.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:52:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:15 compute-0 python3.9[131601]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCH7J4/vrAjqY7b3+xDoxlOrkvqhtdMtNCRu8feksOJjh2Lg2Yk5a4TpRFHHcUew6Or+BSrCAe5KLIJookdMX3AnHBTeYgFVrph2Ke0jsZhtIDdYFPya4HaYgVScxezyYjpFJsOgHIasA47X1Ai7KtSHamdGUMHvyRPFaMroDQGOH5uNA58Pr0jAvA9/p32JhzVhvFTNhdp5AZuuf53LCOoAJPpvxAfhZJVwv0zpQu1qJ2MQ4F6PjmLmpJe9IFedhTbswP4+A8raCmSvJK/X3zbL6A5C78i72YF0dVlX4E5Jgq2BymgfJXA2vRrB7WzfFXN/KCT+A6KjshRy8vEZTlewfHk3bMt+IjAgRaPsvV2gwOQb0lhzfUX2RkPxHTTunUAUf1PJwBTKah0plZAQoGQce+8MWTqKP842KIoZPO7/LQQZR21apoIRIEt1OtR3pITkULZqmoYaZKqVzPCyoagXj2v0W4E//8slRvaC4n2qfMRwvp2VR0mSv9qwMeqnm0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMt6YRNNCvMAUwHQzPKNq18k03sF+qAP+8fg1vdKmMsQ
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN1LMOBquYaNyOmBNhqWyrm3Ot0C+prylWlOCYwa7IIp3WZH4GHwVhjD6VAwSa/KvI01xKiiJwO/WJ4zgAnMAiM=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYQPNjF86l7L2Hj2/ras4UwWV1W/v43YSKx2wyuHDdieMiPaKbrXfDkjmyzUBERrbiTo1QPGQAMAmA2ykBglPN8r/+0SzTmZFPysM5MwJdoYFoZLOFzs9ldQJxEusbWvZnvF+I9UgftR9Kc0etIrQ6xgLbAtGZNGqj5b2kDFCC3J7RJB10JjuqkZ7faqGp+JLC/txEe9rDOAOpOpa885Sx+ZK+5P8OmEbpqHH3vL1O9we9lyRIs2Y/RpIrncEKyaA84WKimjvp832GDFqVGlFklY8lsH31+AUKXfk65cwhnczZO7DTB1/+0QUWhiy+uUUKLdJ1C3AFfHNBBH0WWHolNsPiYjSaNrUIgxXyRLkGtLeTAtEa9LNniw8KKCXI/jptXVVqyfHGOFIzo11NDDSTeCPpVG2MrjX9vJZknGeShJLavvHzVmc1N/zNpgq0Rr0FEyFZL384e8WgnmTY1lBf7tAPdMyIaNEJgEE4MobwqVDSwMmgWKmKoOeY5jsWNlM=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzclsFPuApUw4nYRrZrI5lJm2aKty4lBzS+387uCINA
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCmuS8ms5fq9IWCpSG062zv6KqUIHSk9g+RlcFiU/nKSB1OMQ56HhCeuGAOEbiyfVsMqC143W9W+Q6X1JDoRkcg=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp3Vp6dX4ruCK781x4GIhtAtcJdT75tsPxH3O/YwMPa1JuQj17BT+IZbu0qvi56CLtWm5GwO9cF5N1u+ZpYWIwNbEJlz4q4LeJud7OFwwvwDTdM2fZylZt2dEtwqbmDJUsJxwcLQshtmSxpRR5Z53dCJAMTZiKGF/MiJrVkc7A2PfxMnLH568W9poUGj9jUYetHoRmwKl9hes+OQRljbjUi8gLpseivGxW9IAewXRhJi0ybLNDnQM0iSkdQqaTVD7laQKxpynfO1a0b7U6oyFRdyTqMJqyDKe8Vx+D1esV9oZKn7UEtj+WGUAv3StaLzrk3fjhi4XePCs0Ao1s/B1MPZCcM0Po5BdHAHhf4CbUSRS+oaAS7KaaWkWTKLTKEDWS6DjX6KUR9hUyLQ54IMYu17UP6JclJnH5c9FmUQls07pus/CkhX0IIgOTinLYeOJSdBsKA9JUrnQzXKMAwzjKL18kG8OZ+Yaf7msme1EVikR9ljtRB88k+DtapF5wub8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBDnMNJEcPeKIHMEAdXUabsWNwdNGhiYyZLatE1eeBqY
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLDW7MDD+6+vPlFKWCI8yHUVjDpLwcAatqV8Xhxm53MJMkyP9vCai5lIMwJluZIDUkA83WhSi06EgMc1afHFONA=
                                              create=True mode=0644 path=/tmp/ansible.41qg5f3a state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:15 compute-0 sudo[131599]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:15] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:52:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:15] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:52:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:15 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 08 09:52:16 compute-0 sudo[131754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clqtkmddezlenadzeebfwcehiovvzapv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917135.7109668-196-204136833636265/AnsiballZ_command.py'
Oct 08 09:52:16 compute-0 sudo[131754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:16 compute-0 python3.9[131756]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.41qg5f3a' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:52:16 compute-0 sudo[131754]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:16.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:17 compute-0 ceph-mon[73572]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:17 compute-0 sudo[131909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vusvuhsqwcpiidtmrrfitdgmhphznfxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917136.6160858-220-78017484366016/AnsiballZ_file.py'
Oct 08 09:52:17 compute-0 sudo[131909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:17 compute-0 python3.9[131911]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.41qg5f3a state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:17 compute-0 sudo[131909]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:52:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:17.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:52:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:17.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:17 compute-0 sshd-session[130556]: Connection closed by 192.168.122.30 port 47584
Oct 08 09:52:17 compute-0 sshd-session[130553]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:52:17 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Oct 08 09:52:17 compute-0 systemd[1]: session-46.scope: Consumed 4.976s CPU time.
Oct 08 09:52:17 compute-0 systemd-logind[798]: Session 46 logged out. Waiting for processes to exit.
Oct 08 09:52:17 compute-0 systemd-logind[798]: Removed session 46.
Oct 08 09:52:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:52:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:52:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:52:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:52:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:52:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:52:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:52:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:52:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:52:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:18.848Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:52:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:18.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:19 compute-0 ceph-mon[73572]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:19.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:52:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:19.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:52:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:21 compute-0 sudo[131940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:52:21 compute-0 ceph-mon[73572]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:21 compute-0 sudo[131940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:21 compute-0 sudo[131940]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:21.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:22 compute-0 sshd-session[131966]: Accepted publickey for zuul from 192.168.122.30 port 40538 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:52:22 compute-0 systemd-logind[798]: New session 47 of user zuul.
Oct 08 09:52:22 compute-0 systemd[1]: Started Session 47 of User zuul.
Oct 08 09:52:22 compute-0 sshd-session[131966]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:52:23 compute-0 ceph-mon[73572]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:23.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:23 compute-0 python3.9[132120]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:52:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:24 compute-0 ceph-mon[73572]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:24 compute-0 sshd-session[70484]: Received disconnect from 38.102.83.97 port 59276:11: disconnected by user
Oct 08 09:52:24 compute-0 sshd-session[70484]: Disconnected from user zuul 38.102.83.97 port 59276
Oct 08 09:52:24 compute-0 sshd-session[70481]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:52:24 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 08 09:52:24 compute-0 systemd[1]: session-19.scope: Consumed 1min 32.168s CPU time.
Oct 08 09:52:24 compute-0 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Oct 08 09:52:24 compute-0 systemd-logind[798]: Removed session 19.
Oct 08 09:52:24 compute-0 sudo[132275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkpbngkfyunlkcrmgqixzatuuamhjega ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917144.109347-56-46011764175822/AnsiballZ_systemd.py'
Oct 08 09:52:24 compute-0 sudo[132275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:25 compute-0 python3.9[132277]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 08 09:52:25 compute-0 sudo[132275]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:52:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:25.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:52:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:52:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:25.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:52:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:25 compute-0 sudo[132430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyfaeidfwahwmdsnxqptxrbsgjxavebg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917145.3046334-80-148752790816950/AnsiballZ_systemd.py'
Oct 08 09:52:25 compute-0 sudo[132430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:25] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:52:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:25] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:52:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:25 compute-0 python3.9[132432]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:52:25 compute-0 sudo[132430]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:26 compute-0 sudo[132584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgnzseqsefbamnljaxzeixykajeaijeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917146.2320457-107-102421625199998/AnsiballZ_command.py'
Oct 08 09:52:26 compute-0 sudo[132584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:26 compute-0 ceph-mon[73572]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:26 compute-0 python3.9[132586]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:52:26 compute-0 sudo[132584]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:26.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:27.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:27.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:27 compute-0 sudo[132738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irasoreyskbiaendcqkzzjrqglctomla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917147.1570203-131-189113462746718/AnsiballZ_stat.py'
Oct 08 09:52:27 compute-0 sudo[132738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:27 compute-0 python3.9[132740]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:52:27 compute-0 sudo[132738]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:28 compute-0 sudo[132891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcpgzijxsmwozwirjaivrzlnorsrqbzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917148.0340967-158-42824691352168/AnsiballZ_file.py'
Oct 08 09:52:28 compute-0 sudo[132891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:28 compute-0 python3.9[132893]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:28 compute-0 sudo[132891]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:28 compute-0 ceph-mon[73572]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:28.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:29 compute-0 sshd-session[131969]: Connection closed by 192.168.122.30 port 40538
Oct 08 09:52:29 compute-0 sshd-session[131966]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:52:29 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Oct 08 09:52:29 compute-0 systemd[1]: session-47.scope: Consumed 3.959s CPU time.
Oct 08 09:52:29 compute-0 systemd-logind[798]: Session 47 logged out. Waiting for processes to exit.
Oct 08 09:52:29 compute-0 systemd-logind[798]: Removed session 47.
Oct 08 09:52:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:29.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:29.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:30 compute-0 ceph-mon[73572]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:31.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:52:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:52:32 compute-0 ceph-mon[73572]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:33.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:52:34 compute-0 ceph-mon[73572]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:34 compute-0 sshd-session[132927]: Accepted publickey for zuul from 192.168.122.30 port 38428 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:52:34 compute-0 systemd-logind[798]: New session 48 of user zuul.
Oct 08 09:52:34 compute-0 systemd[1]: Started Session 48 of User zuul.
Oct 08 09:52:34 compute-0 sshd-session[132927]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:52:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:35.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:35.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:35] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct 08 09:52:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:35] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct 08 09:52:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:36 compute-0 python3.9[133081]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:52:36 compute-0 sudo[133236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqmpjxgoymiujopypaoleqawdaaijcrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917156.4878263-62-104421064501822/AnsiballZ_setup.py'
Oct 08 09:52:36 compute-0 sudo[133236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:36 compute-0 ceph-mon[73572]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:36.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:37 compute-0 python3.9[133238]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:52:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:37 compute-0 sudo[133236]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:37.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:52:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:37.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:52:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:37 compute-0 sudo[133321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqqzitmwjrlvsgvitonjakugsdvhawvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917156.4878263-62-104421064501822/AnsiballZ_dnf.py'
Oct 08 09:52:37 compute-0 sudo[133321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:37 compute-0 python3.9[133323]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 08 09:52:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:38.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:38 compute-0 ceph-mon[73572]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:39 compute-0 sudo[133321]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:39.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:39.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:40 compute-0 python3.9[133477]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:52:40 compute-0 ceph-mon[73572]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:41 compute-0 sudo[133579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:52:41 compute-0 sudo[133579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:41 compute-0 sudo[133579]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:41.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:52:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:41.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:52:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:41 compute-0 python3.9[133654]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 09:52:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:42 compute-0 python3.9[133805]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:52:42 compute-0 python3.9[133955]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:52:42 compute-0 ceph-mon[73572]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:43.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:43.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.533342) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163533381, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1982, "num_deletes": 251, "total_data_size": 3894857, "memory_usage": 3951704, "flush_reason": "Manual Compaction"}
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163544932, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2343400, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10780, "largest_seqno": 12761, "table_properties": {"data_size": 2336695, "index_size": 3583, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16477, "raw_average_key_size": 20, "raw_value_size": 2322072, "raw_average_value_size": 2863, "num_data_blocks": 157, "num_entries": 811, "num_filter_entries": 811, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916974, "oldest_key_time": 1759916974, "file_creation_time": 1759917163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 11638 microseconds, and 5497 cpu microseconds.
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.544982) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2343400 bytes OK
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.545001) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.546202) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.546221) EVENT_LOG_v1 {"time_micros": 1759917163546216, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.546241) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3886780, prev total WAL file size 3886780, number of live WAL files 2.
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.547522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2288KB)], [26(13MB)]
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163547599, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16583344, "oldest_snapshot_seqno": -1}
Oct 08 09:52:43 compute-0 sshd-session[132930]: Connection closed by 192.168.122.30 port 38428
Oct 08 09:52:43 compute-0 sshd-session[132927]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:52:43 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Oct 08 09:52:43 compute-0 systemd[1]: session-48.scope: Consumed 5.817s CPU time.
Oct 08 09:52:43 compute-0 systemd-logind[798]: Session 48 logged out. Waiting for processes to exit.
Oct 08 09:52:43 compute-0 systemd-logind[798]: Removed session 48.
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4411 keys, 14671363 bytes, temperature: kUnknown
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163607566, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14671363, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14637639, "index_size": 21582, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111216, "raw_average_key_size": 25, "raw_value_size": 14552948, "raw_average_value_size": 3299, "num_data_blocks": 924, "num_entries": 4411, "num_filter_entries": 4411, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.607794) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14671363 bytes
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.608799) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 276.2 rd, 244.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.6 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(13.3) write-amplify(6.3) OK, records in: 4843, records dropped: 432 output_compression: NoCompression
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.608817) EVENT_LOG_v1 {"time_micros": 1759917163608809, "job": 10, "event": "compaction_finished", "compaction_time_micros": 60042, "compaction_time_cpu_micros": 30171, "output_level": 6, "num_output_files": 1, "total_output_size": 14671363, "num_input_records": 4843, "num_output_records": 4411, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163609300, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163611903, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.547388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:52:43 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:52:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:44 compute-0 ceph-mon[73572]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:52:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:45.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:45.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:45] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct 08 09:52:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:45] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct 08 09:52:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:46 compute-0 ceph-mon[73572]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:46.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:47.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:47.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:52:47
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'images', 'vms', '.mgr', 'volumes', '.rgw.root', 'default.rgw.control', '.nfs', 'default.rgw.log']
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:52:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:52:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:52:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:52:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:52:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:52:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:52:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:52:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:52:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:52:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:52:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:52:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:52:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:52:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:48.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:48 compute-0 sshd-session[133987]: Accepted publickey for zuul from 192.168.122.30 port 59186 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:52:48 compute-0 systemd-logind[798]: New session 49 of user zuul.
Oct 08 09:52:48 compute-0 systemd[1]: Started Session 49 of User zuul.
Oct 08 09:52:48 compute-0 sshd-session[133987]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:52:48 compute-0 ceph-mon[73572]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:49.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:52:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:49.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:52:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:50 compute-0 python3.9[134141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:52:51 compute-0 ceph-mon[73572]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:52:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:51.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:51.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:51 compute-0 sudo[134297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsaepxjolegtizzdcmbkkzqmuxqzofib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917171.199655-111-219156692448181/AnsiballZ_file.py'
Oct 08 09:52:51 compute-0 sudo[134297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:51 compute-0 python3.9[134299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:52:51 compute-0 sudo[134297]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:52 compute-0 sudo[134450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyyjlgapoqhduxtkszigcnvzupaclryp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917172.0594923-111-131344831857123/AnsiballZ_file.py'
Oct 08 09:52:52 compute-0 sudo[134450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:52 compute-0 sudo[134453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:52:52 compute-0 python3.9[134452]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:52:52 compute-0 sudo[134453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:52 compute-0 sudo[134453]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:52 compute-0 sudo[134450]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:52 compute-0 sudo[134478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:52:52 compute-0 sudo[134478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:53 compute-0 ceph-mon[73572]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:52:53 compute-0 sudo[134478]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:53 compute-0 sudo[134684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpizghmwmpyouiotqigwdoljjzpvuecb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917172.7414677-160-274451087208633/AnsiballZ_stat.py'
Oct 08 09:52:53 compute-0 sudo[134684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:52:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:52:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:52:53 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:52:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:52:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:52:53 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:52:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:52:53 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:52:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:52:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:52:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:52:53 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:52:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:52:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:52:53 compute-0 sudo[134687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:52:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:53 compute-0 sudo[134687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:53 compute-0 sudo[134687]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:53 compute-0 python3.9[134686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:52:53 compute-0 sudo[134712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:52:53 compute-0 sudo[134712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:53 compute-0 sudo[134684]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:53.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:53.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:53 compute-0 podman[134849]: 2025-10-08 09:52:53.696845972 +0000 UTC m=+0.048692259 container create 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 08 09:52:53 compute-0 systemd[1]: Started libpod-conmon-865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5.scope.
Oct 08 09:52:53 compute-0 podman[134849]: 2025-10-08 09:52:53.678094961 +0000 UTC m=+0.029941268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:52:53 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:52:53 compute-0 sudo[134919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtteauaatqwovbiypjqyoluepdmbrjvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917172.7414677-160-274451087208633/AnsiballZ_copy.py'
Oct 08 09:52:53 compute-0 sudo[134919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:53 compute-0 podman[134849]: 2025-10-08 09:52:53.789476045 +0000 UTC m=+0.141322342 container init 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Oct 08 09:52:53 compute-0 podman[134849]: 2025-10-08 09:52:53.796325773 +0000 UTC m=+0.148172060 container start 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:52:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00040d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:53 compute-0 podman[134849]: 2025-10-08 09:52:53.799855257 +0000 UTC m=+0.151701544 container attach 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 09:52:53 compute-0 zealous_keldysh[134912]: 167 167
Oct 08 09:52:53 compute-0 systemd[1]: libpod-865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5.scope: Deactivated successfully.
Oct 08 09:52:53 compute-0 podman[134849]: 2025-10-08 09:52:53.803360119 +0000 UTC m=+0.155206416 container died 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:52:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-31ff4be75cf2cb65c051caa49347eefb8bf505f88b1ecf023c5c322fe907c03c-merged.mount: Deactivated successfully.
Oct 08 09:52:53 compute-0 podman[134849]: 2025-10-08 09:52:53.854575887 +0000 UTC m=+0.206422204 container remove 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:52:53 compute-0 systemd[1]: libpod-conmon-865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5.scope: Deactivated successfully.
Oct 08 09:52:53 compute-0 python3.9[134921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917172.7414677-160-274451087208633/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=03e9ebef9d51a593a38c809f93442d2e40b72597 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:54 compute-0 sudo[134919]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:52:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:52:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:52:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:52:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:52:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:52:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:52:54 compute-0 podman[134943]: 2025-10-08 09:52:54.067642072 +0000 UTC m=+0.049501524 container create d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 08 09:52:54 compute-0 systemd[1]: Started libpod-conmon-d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12.scope.
Oct 08 09:52:54 compute-0 podman[134943]: 2025-10-08 09:52:54.043124768 +0000 UTC m=+0.024984240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:52:54 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:52:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:54 compute-0 podman[134943]: 2025-10-08 09:52:54.168797468 +0000 UTC m=+0.150656970 container init d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:52:54 compute-0 podman[134943]: 2025-10-08 09:52:54.18414622 +0000 UTC m=+0.166005672 container start d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 09:52:54 compute-0 podman[134943]: 2025-10-08 09:52:54.195214734 +0000 UTC m=+0.177074196 container attach d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:52:54 compute-0 sudo[135120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzkuwfffzuneqawbtwbnzhpwqytkvijy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917174.1658678-160-91309174977549/AnsiballZ_stat.py'
Oct 08 09:52:54 compute-0 sudo[135120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:54 compute-0 boring_margulis[134982]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:52:54 compute-0 boring_margulis[134982]: --> All data devices are unavailable
Oct 08 09:52:54 compute-0 systemd[1]: libpod-d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12.scope: Deactivated successfully.
Oct 08 09:52:54 compute-0 podman[134943]: 2025-10-08 09:52:54.532082739 +0000 UTC m=+0.513942201 container died d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:52:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec-merged.mount: Deactivated successfully.
Oct 08 09:52:54 compute-0 podman[134943]: 2025-10-08 09:52:54.581234152 +0000 UTC m=+0.563093604 container remove d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 09:52:54 compute-0 systemd[1]: libpod-conmon-d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12.scope: Deactivated successfully.
Oct 08 09:52:54 compute-0 sudo[134712]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:54 compute-0 sudo[135141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:52:54 compute-0 sudo[135141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:54 compute-0 sudo[135141]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:54 compute-0 python3.9[135123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:52:54 compute-0 sudo[135120]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:54 compute-0 sudo[135166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:52:54 compute-0 sudo[135166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:55 compute-0 sudo[135339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiuiwxzrhjelkqtzamaudvqhjrathlyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917174.1658678-160-91309174977549/AnsiballZ_copy.py'
Oct 08 09:52:55 compute-0 sudo[135339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:55 compute-0 ceph-mon[73572]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:52:55 compute-0 podman[135357]: 2025-10-08 09:52:55.120810102 +0000 UTC m=+0.033250415 container create 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 09:52:55 compute-0 systemd[1]: Started libpod-conmon-720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d.scope.
Oct 08 09:52:55 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:52:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:52:55 compute-0 podman[135357]: 2025-10-08 09:52:55.190059467 +0000 UTC m=+0.102499800 container init 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 09:52:55 compute-0 podman[135357]: 2025-10-08 09:52:55.197079302 +0000 UTC m=+0.109519615 container start 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 08 09:52:55 compute-0 podman[135357]: 2025-10-08 09:52:55.200536573 +0000 UTC m=+0.112976886 container attach 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 08 09:52:55 compute-0 cranky_galois[135373]: 167 167
Oct 08 09:52:55 compute-0 systemd[1]: libpod-720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d.scope: Deactivated successfully.
Oct 08 09:52:55 compute-0 podman[135357]: 2025-10-08 09:52:55.201974728 +0000 UTC m=+0.114415041 container died 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:52:55 compute-0 podman[135357]: 2025-10-08 09:52:55.107428974 +0000 UTC m=+0.019869317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:52:55 compute-0 python3.9[135344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917174.1658678-160-91309174977549/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ca91fd4512d7d0461b1179af92a523d933a341ea backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ead4f79fa5217edf2e401a91599a952ac02ddf406fdce28b26c3081b7fd2956-merged.mount: Deactivated successfully.
Oct 08 09:52:55 compute-0 podman[135357]: 2025-10-08 09:52:55.251329898 +0000 UTC m=+0.163770251 container remove 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:52:55 compute-0 sudo[135339]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:55 compute-0 systemd[1]: libpod-conmon-720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d.scope: Deactivated successfully.
Oct 08 09:52:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:55.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:55 compute-0 podman[135443]: 2025-10-08 09:52:55.439242238 +0000 UTC m=+0.043439420 container create bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:52:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:55.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:55 compute-0 systemd[1]: Started libpod-conmon-bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531.scope.
Oct 08 09:52:55 compute-0 podman[135443]: 2025-10-08 09:52:55.421442359 +0000 UTC m=+0.025639551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:52:55 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:55 compute-0 podman[135443]: 2025-10-08 09:52:55.560082183 +0000 UTC m=+0.164279375 container init bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 09:52:55 compute-0 podman[135443]: 2025-10-08 09:52:55.570974882 +0000 UTC m=+0.175172094 container start bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:52:55 compute-0 podman[135443]: 2025-10-08 09:52:55.574462084 +0000 UTC m=+0.178659266 container attach bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:52:55 compute-0 sudo[135567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhfdgkpgjydickwlvhsmsvtxpvwbxvvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917175.3847125-160-65360322527584/AnsiballZ_stat.py'
Oct 08 09:52:55 compute-0 sudo[135567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:55] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Oct 08 09:52:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:55] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Oct 08 09:52:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:55 compute-0 interesting_knuth[135512]: {
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:     "1": [
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:         {
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "devices": [
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "/dev/loop3"
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             ],
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "lv_name": "ceph_lv0",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "lv_size": "21470642176",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "name": "ceph_lv0",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "tags": {
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.cluster_name": "ceph",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.crush_device_class": "",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.encrypted": "0",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.osd_id": "1",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.type": "block",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.vdo": "0",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:                 "ceph.with_tpm": "0"
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             },
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "type": "block",
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:             "vg_name": "ceph_vg0"
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:         }
Oct 08 09:52:55 compute-0 interesting_knuth[135512]:     ]
Oct 08 09:52:55 compute-0 interesting_knuth[135512]: }
Oct 08 09:52:55 compute-0 systemd[1]: libpod-bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531.scope: Deactivated successfully.
Oct 08 09:52:55 compute-0 podman[135443]: 2025-10-08 09:52:55.872526479 +0000 UTC m=+0.476723671 container died bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 09:52:55 compute-0 python3.9[135569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18-merged.mount: Deactivated successfully.
Oct 08 09:52:55 compute-0 sudo[135567]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:55 compute-0 podman[135443]: 2025-10-08 09:52:55.948278592 +0000 UTC m=+0.552475774 container remove bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:52:55 compute-0 systemd[1]: libpod-conmon-bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531.scope: Deactivated successfully.
Oct 08 09:52:56 compute-0 sudo[135166]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:56 compute-0 sudo[135622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:52:56 compute-0 sudo[135622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:56 compute-0 sudo[135622]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:56 compute-0 sudo[135673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:52:56 compute-0 sudo[135673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:56 compute-0 sudo[135759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpvnsssnbdmzycaojcopdsgeoioxwtcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917175.3847125-160-65360322527584/AnsiballZ_copy.py'
Oct 08 09:52:56 compute-0 sudo[135759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:56 compute-0 python3.9[135761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917175.3847125-160-65360322527584/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ad35324b46d028e64dbb491e0ae0f5e3bb7a2175 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:56 compute-0 sudo[135759]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:56 compute-0 podman[135826]: 2025-10-08 09:52:56.550525446 +0000 UTC m=+0.039042520 container create 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:52:56 compute-0 systemd[1]: Started libpod-conmon-5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e.scope.
Oct 08 09:52:56 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:52:56 compute-0 podman[135826]: 2025-10-08 09:52:56.628732179 +0000 UTC m=+0.117249283 container init 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:52:56 compute-0 podman[135826]: 2025-10-08 09:52:56.532933634 +0000 UTC m=+0.021450738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:52:56 compute-0 podman[135826]: 2025-10-08 09:52:56.640215365 +0000 UTC m=+0.128732429 container start 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Oct 08 09:52:56 compute-0 podman[135826]: 2025-10-08 09:52:56.64348533 +0000 UTC m=+0.132002404 container attach 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:52:56 compute-0 dreamy_brown[135843]: 167 167
Oct 08 09:52:56 compute-0 systemd[1]: libpod-5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e.scope: Deactivated successfully.
Oct 08 09:52:56 compute-0 podman[135826]: 2025-10-08 09:52:56.647122027 +0000 UTC m=+0.135639141 container died 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 09:52:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-320d0fdaed1082cef5b7ec1a4af066075b1f2329f4b4d410544630978833fbfd-merged.mount: Deactivated successfully.
Oct 08 09:52:56 compute-0 podman[135826]: 2025-10-08 09:52:56.701679652 +0000 UTC m=+0.190196746 container remove 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:52:56 compute-0 systemd[1]: libpod-conmon-5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e.scope: Deactivated successfully.
Oct 08 09:52:56 compute-0 podman[135941]: 2025-10-08 09:52:56.859458609 +0000 UTC m=+0.040482195 container create b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:52:56 compute-0 systemd[1]: Started libpod-conmon-b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b.scope.
Oct 08 09:52:56 compute-0 podman[135941]: 2025-10-08 09:52:56.841550666 +0000 UTC m=+0.022574272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:52:56 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:52:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:52:56 compute-0 sudo[136011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voueqmlyzcoryovyvvjlbaapzcjwmznb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917176.6453152-295-175656181716523/AnsiballZ_file.py'
Oct 08 09:52:56 compute-0 sudo[136011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:56 compute-0 podman[135941]: 2025-10-08 09:52:56.957730953 +0000 UTC m=+0.138754579 container init b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:52:56 compute-0 podman[135941]: 2025-10-08 09:52:56.965112609 +0000 UTC m=+0.146136235 container start b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:52:56 compute-0 podman[135941]: 2025-10-08 09:52:56.968625631 +0000 UTC m=+0.149649227 container attach b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:52:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:56.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:57 compute-0 ceph-mon[73572]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:52:57 compute-0 python3.9[136013]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:52:57 compute-0 sudo[136011]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:52:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:57.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:57.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:57 compute-0 sudo[136231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yizgfpbtijljvvpdmkcgrahslkpczynf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917177.2852957-295-248483539919667/AnsiballZ_file.py'
Oct 08 09:52:57 compute-0 sudo[136231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:57 compute-0 lvm[136238]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:52:57 compute-0 lvm[136238]: VG ceph_vg0 finished
Oct 08 09:52:57 compute-0 strange_dhawan[135993]: {}
Oct 08 09:52:57 compute-0 podman[135941]: 2025-10-08 09:52:57.643644564 +0000 UTC m=+0.824668190 container died b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:52:57 compute-0 systemd[1]: libpod-b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b.scope: Deactivated successfully.
Oct 08 09:52:57 compute-0 systemd[1]: libpod-b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b.scope: Consumed 1.095s CPU time.
Oct 08 09:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37-merged.mount: Deactivated successfully.
Oct 08 09:52:57 compute-0 podman[135941]: 2025-10-08 09:52:57.706053961 +0000 UTC m=+0.887077547 container remove b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:52:57 compute-0 python3.9[136236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:52:57 compute-0 systemd[1]: libpod-conmon-b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b.scope: Deactivated successfully.
Oct 08 09:52:57 compute-0 sudo[136231]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:57 compute-0 sudo[135673]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:52:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:52:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:52:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:52:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:57 compute-0 sudo[136281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:52:57 compute-0 sudo[136281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:52:57 compute-0 sudo[136281]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:58 compute-0 sudo[136432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcheadgvkkkiqmnwaibagazgnulvorhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917177.8831885-341-230264003955239/AnsiballZ_stat.py'
Oct 08 09:52:58 compute-0 sudo[136432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:58 compute-0 python3.9[136434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:52:58 compute-0 sudo[136432]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:52:58 compute-0 sudo[136555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wytwpucwauepjlexogquhwoekumxnyej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917177.8831885-341-230264003955239/AnsiballZ_copy.py'
Oct 08 09:52:58 compute-0 sudo[136555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:58 compute-0 ceph-mon[73572]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:52:58 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:52:58 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:52:58 compute-0 python3.9[136557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917177.8831885-341-230264003955239/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3f143f01cb342955611becbf857e62f04ecd5a97 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:58.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:52:58 compute-0 sudo[136555]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 356 B/s rd, 0 op/s
Oct 08 09:52:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:59 compute-0 sudo[136708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiwalbcxxhjlrgmrnpskrrkosvwsbjyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917179.0224504-341-158103416950833/AnsiballZ_stat.py'
Oct 08 09:52:59 compute-0 sudo[136708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:59.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:59 compute-0 python3.9[136710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:52:59 compute-0 sudo[136708]: pam_unix(sudo:session): session closed for user root
Oct 08 09:52:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:52:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:52:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:59.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:52:59 compute-0 sudo[136831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vthewcsutjrizbzvvpeermytfymcbegc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917179.0224504-341-158103416950833/AnsiballZ_copy.py'
Oct 08 09:52:59 compute-0 sudo[136831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:52:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:52:59 compute-0 python3.9[136833]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917179.0224504-341-158103416950833/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=48470e628d65eda3076b7ed534cda7f3290d3587 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:52:59 compute-0 sudo[136831]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:00 compute-0 sudo[136984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjhjemqydthvbudyfndnplmefurmsqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917180.1175025-341-167730293587877/AnsiballZ_stat.py'
Oct 08 09:53:00 compute-0 sudo[136984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:00 compute-0 python3.9[136986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:00 compute-0 sudo[136984]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:00 compute-0 sudo[137107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hspxdksgudnvsvgqemihbkihngceqxao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917180.1175025-341-167730293587877/AnsiballZ_copy.py'
Oct 08 09:53:00 compute-0 sudo[137107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:00 compute-0 ceph-mon[73572]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 356 B/s rd, 0 op/s
Oct 08 09:53:01 compute-0 python3.9[137109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917180.1175025-341-167730293587877/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c55d6c2cc7f81b34bd89a051ca87d4a2fe6fb78b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:01 compute-0 sudo[137107]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.186404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181186463, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 436, "num_deletes": 251, "total_data_size": 411254, "memory_usage": 419800, "flush_reason": "Manual Compaction"}
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 08 09:53:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181254299, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 406477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12762, "largest_seqno": 13197, "table_properties": {"data_size": 403969, "index_size": 608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6103, "raw_average_key_size": 18, "raw_value_size": 398839, "raw_average_value_size": 1201, "num_data_blocks": 26, "num_entries": 332, "num_filter_entries": 332, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917164, "oldest_key_time": 1759917164, "file_creation_time": 1759917181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 67927 microseconds, and 1659 cpu microseconds.
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.254341) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 406477 bytes OK
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.254360) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.255733) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.255746) EVENT_LOG_v1 {"time_micros": 1759917181255742, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.255761) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 408599, prev total WAL file size 408599, number of live WAL files 2.
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.256206) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(396KB)], [29(13MB)]
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181256323, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15077840, "oldest_snapshot_seqno": -1}
Oct 08 09:53:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:01 compute-0 sudo[137135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:53:01 compute-0 sudo[137135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:53:01 compute-0 sudo[137135]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:53:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:01.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4228 keys, 12630119 bytes, temperature: kUnknown
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181506046, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12630119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12599370, "index_size": 19055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108333, "raw_average_key_size": 25, "raw_value_size": 12519588, "raw_average_value_size": 2961, "num_data_blocks": 804, "num_entries": 4228, "num_filter_entries": 4228, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.506523) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12630119 bytes
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.535767) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 60.3 rd, 50.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 14.0 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(68.2) write-amplify(31.1) OK, records in: 4743, records dropped: 515 output_compression: NoCompression
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.535803) EVENT_LOG_v1 {"time_micros": 1759917181535789, "job": 12, "event": "compaction_finished", "compaction_time_micros": 250041, "compaction_time_cpu_micros": 34227, "output_level": 6, "num_output_files": 1, "total_output_size": 12630119, "num_input_records": 4743, "num_output_records": 4228, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181536669, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181540046, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.256104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:53:01 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 09:53:01 compute-0 sudo[137285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oejpncciqtyitieweeujpwxrgahchquf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917181.331992-467-254426646523606/AnsiballZ_file.py'
Oct 08 09:53:01 compute-0 sudo[137285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:01 compute-0 python3.9[137287]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:01 compute-0 sudo[137285]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:02 compute-0 ceph-mon[73572]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:53:02 compute-0 sudo[137438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmzqkwklrhfmkhefvdsszrwxlkrrdnpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917181.9726708-467-128440247350975/AnsiballZ_file.py'
Oct 08 09:53:02 compute-0 sudo[137438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:02 compute-0 python3.9[137440]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:02 compute-0 sudo[137438]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:53:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:53:02 compute-0 sudo[137590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfdinchvxelexlxvuoweeycpfrgyqysv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917182.6103377-514-35270683229774/AnsiballZ_stat.py'
Oct 08 09:53:02 compute-0 sudo[137590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:03 compute-0 python3.9[137592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:03 compute-0 sudo[137590]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:53:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:53:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 09:53:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2758 writes, 13K keys, 2758 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2758 writes, 2758 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2758 writes, 13K keys, 2758 commit groups, 1.0 writes per commit group, ingest: 24.36 MB, 0.04 MB/s
                                           Interval WAL: 2758 writes, 2758 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     98.9      0.21              0.05         6    0.036       0      0       0.0       0.0
                                             L6      1/0   12.05 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.0    119.5    104.5      0.60              0.15         5    0.120     21K   2300       0.0       0.0
                                            Sum      1/0   12.05 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0     88.2    103.0      0.81              0.19        11    0.074     21K   2300       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0     88.6    103.4      0.81              0.19        10    0.081     21K   2300       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    119.5    104.5      0.60              0.15         5    0.120     21K   2300       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    100.2      0.21              0.05         5    0.042       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.021, interval 0.021
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 304.00 MB usage: 2.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(179,2.56 MB,0.840769%) FilterBlock(12,69.05 KB,0.0221805%) IndexBlock(12,139.64 KB,0.0448578%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 08 09:53:03 compute-0 sudo[137715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaksjzwbtbndqusgwdyqlvxnwzaftwoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917182.6103377-514-35270683229774/AnsiballZ_copy.py'
Oct 08 09:53:03 compute-0 sudo[137715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:03.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:03.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:03 compute-0 python3.9[137717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917182.6103377-514-35270683229774/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=03fb8466dc9bc88568994ca20bb9a6a853d6a7b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:03 compute-0 sudo[137715]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0021a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:03 compute-0 sudo[137868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiybkrerusdmidrpddtimxejizeihddl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917183.6904774-514-99374613159652/AnsiballZ_stat.py'
Oct 08 09:53:03 compute-0 sudo[137868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:04 compute-0 python3.9[137870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:04 compute-0 sudo[137868]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:04 compute-0 ceph-mon[73572]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct 08 09:53:04 compute-0 sudo[137991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avznolmuwslnaosklhksxibhwbwtcogf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917183.6904774-514-99374613159652/AnsiballZ_copy.py'
Oct 08 09:53:04 compute-0 sudo[137991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:04 compute-0 python3.9[137993]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917183.6904774-514-99374613159652/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=48470e628d65eda3076b7ed534cda7f3290d3587 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:04 compute-0 sudo[137991]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:05 compute-0 sudo[138143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzvyfdqsueptmjfdqrqjngpkovltskrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917184.7747977-514-64563649056360/AnsiballZ_stat.py'
Oct 08 09:53:05 compute-0 sudo[138143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:05 compute-0 python3.9[138145]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:05 compute-0 sudo[138143]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:05.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:05.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:05 compute-0 sudo[138268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvaqwruugijmkulklrawzsaxafzvxre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917184.7747977-514-64563649056360/AnsiballZ_copy.py'
Oct 08 09:53:05 compute-0 sudo[138268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:05] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:53:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:05] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:53:05 compute-0 python3.9[138270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917184.7747977-514-64563649056360/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=418fd7eda72a3b52b4f2ef9bbd18a4fa7984c61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:05 compute-0 sudo[138268]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:06 compute-0 ceph-mon[73572]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:06 compute-0 sudo[138421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-przzucxnoyjmndxfgsbkwigrcptabbhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917186.5578997-676-109020439174922/AnsiballZ_file.py'
Oct 08 09:53:06 compute-0 sudo[138421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:06.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:07 compute-0 python3.9[138423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:07 compute-0 sudo[138421]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0021a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:53:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:07.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:53:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:07 compute-0 sudo[138574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfzcztcrdvgnzcfkjlvkruhfkzhtvgfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917187.1929162-711-69490102780985/AnsiballZ_stat.py'
Oct 08 09:53:07 compute-0 sudo[138574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:53:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:07.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:53:07 compute-0 python3.9[138576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:07 compute-0 sudo[138574]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:07 compute-0 sudo[138698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wekqelaidqqgthyhiwxmvwjzmcvtdjtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917187.1929162-711-69490102780985/AnsiballZ_copy.py'
Oct 08 09:53:07 compute-0 sudo[138698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:08 compute-0 python3.9[138700]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917187.1929162-711-69490102780985/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:08 compute-0 sudo[138698]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:08 compute-0 ceph-mon[73572]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:08 compute-0 sudo[138850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agxzbsbrlijiloaetsjijztuatdjuahm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917188.3514464-763-1577408924673/AnsiballZ_file.py'
Oct 08 09:53:08 compute-0 sudo[138850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:08 compute-0 python3.9[138852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:08 compute-0 sudo[138850]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:08.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:53:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:08.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:53:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:08.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:53:09 compute-0 sudo[139003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsaibuxogjlmnwmdnrcgoqivqzzveadm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917188.9388752-785-158450116942737/AnsiballZ_stat.py'
Oct 08 09:53:09 compute-0 sudo[139003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:09 compute-0 python3.9[139005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:09 compute-0 sudo[139003]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:53:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:09.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:53:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0021a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:09.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:09 compute-0 sudo[139126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewjhnbbttxklerukysutxrwnsbahnolu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917188.9388752-785-158450116942737/AnsiballZ_copy.py'
Oct 08 09:53:09 compute-0 sudo[139126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:09 compute-0 python3.9[139128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917188.9388752-785-158450116942737/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:09 compute-0 sudo[139126]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:10 compute-0 ceph-mon[73572]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:10 compute-0 sudo[139279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svfemcrotlunzmixdlfxdzhxvsepozwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917190.0544987-827-18851205100459/AnsiballZ_file.py'
Oct 08 09:53:10 compute-0 sudo[139279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:10 compute-0 python3.9[139281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:10 compute-0 sudo[139279]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:10 compute-0 sudo[139431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnjggqurkuqrqrffkuzjfqmrawpspukn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917190.6573083-849-62592092562032/AnsiballZ_stat.py'
Oct 08 09:53:10 compute-0 sudo[139431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:11 compute-0 python3.9[139433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:11 compute-0 sudo[139431]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:53:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:11.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:53:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:11.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:11 compute-0 sudo[139555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzskaubcmvrwpngookasjzejdbcskggx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917190.6573083-849-62592092562032/AnsiballZ_copy.py'
Oct 08 09:53:11 compute-0 sudo[139555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:11 compute-0 python3.9[139557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917190.6573083-849-62592092562032/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:11 compute-0 sudo[139555]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0021a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:12 compute-0 sudo[139708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npxwnmgaeijtiinnfqsenwokqqwtxwel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917191.9962404-901-110204182027437/AnsiballZ_file.py'
Oct 08 09:53:12 compute-0 sudo[139708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:12 compute-0 python3.9[139710]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:12 compute-0 ceph-mon[73572]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:12 compute-0 sudo[139708]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:12 compute-0 sudo[139860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvlrxhqvzrpvhrzuttihbzhmgiwybmrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917192.5912046-924-182610053387294/AnsiballZ_stat.py'
Oct 08 09:53:12 compute-0 sudo[139860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:13 compute-0 python3.9[139862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:13 compute-0 sudo[139860]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:13.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:13 compute-0 sudo[139984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvevjvlsbohgnnfjxckuxpghsrxvqrfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917192.5912046-924-182610053387294/AnsiballZ_copy.py'
Oct 08 09:53:13 compute-0 sudo[139984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:13.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:13 compute-0 python3.9[139986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917192.5912046-924-182610053387294/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:13 compute-0 sudo[139984]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:14 compute-0 sudo[140137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzqjktqihlgijdilnwmqxebwccczlolb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917193.8931048-976-261560543261340/AnsiballZ_file.py'
Oct 08 09:53:14 compute-0 sudo[140137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:14 compute-0 python3.9[140139]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:14 compute-0 sudo[140137]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:14 compute-0 ceph-mon[73572]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:14 compute-0 sudo[140289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybabkpvoopiubdkjcubudyxhynswulct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917194.5153232-1000-16154548506359/AnsiballZ_stat.py'
Oct 08 09:53:14 compute-0 sudo[140289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:14 compute-0 python3.9[140291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:14 compute-0 sudo[140289]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0037d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:15 compute-0 sudo[140413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgkzcxldotvpibeoaizgggqqsdofwctj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917194.5153232-1000-16154548506359/AnsiballZ_copy.py'
Oct 08 09:53:15 compute-0 sudo[140413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:15.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:53:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:15.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:53:15 compute-0 python3.9[140415]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917194.5153232-1000-16154548506359/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:15 compute-0 sudo[140413]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:15] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:53:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:15] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:53:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:16 compute-0 sudo[140566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddlmqmonjghvrhifdanxfwwcaxxudqoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917195.7931526-1047-259898838654563/AnsiballZ_file.py'
Oct 08 09:53:16 compute-0 sudo[140566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:16 compute-0 python3.9[140568]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:16 compute-0 sudo[140566]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:16 compute-0 ceph-mon[73572]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:16 compute-0 sudo[140718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olxsztvirgyfmupshnxpqgmcmardsbjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917196.425451-1070-95273254218371/AnsiballZ_stat.py'
Oct 08 09:53:16 compute-0 sudo[140718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:16 compute-0 python3.9[140720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:16 compute-0 sudo[140718]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:16.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:17 compute-0 sudo[140842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snqlltznmbdjutpkjehtfujdfrofnumb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917196.425451-1070-95273254218371/AnsiballZ_copy.py'
Oct 08 09:53:17 compute-0 sudo[140842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:17.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:17 compute-0 python3.9[140844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917196.425451-1070-95273254218371/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0037d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:17 compute-0 sudo[140842]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:53:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:53:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:53:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:53:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:53:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:53:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:53:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:53:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:53:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:53:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:18 compute-0 ceph-mon[73572]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:53:18 compute-0 sshd-session[133990]: Connection closed by 192.168.122.30 port 59186
Oct 08 09:53:18 compute-0 sshd-session[133987]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:53:18 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct 08 09:53:18 compute-0 systemd[1]: session-49.scope: Consumed 22.408s CPU time.
Oct 08 09:53:18 compute-0 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Oct 08 09:53:18 compute-0 systemd-logind[798]: Removed session 49.
Oct 08 09:53:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:18.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:19.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:53:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:53:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:20 compute-0 ceph-mon[73572]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:21 compute-0 sudo[140874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:53:21 compute-0 sudo[140874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:53:21 compute-0 sudo[140874]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:21.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:21.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:22 compute-0 ceph-mon[73572]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:23.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:23 compute-0 sshd-session[140901]: Accepted publickey for zuul from 192.168.122.30 port 44554 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:53:23 compute-0 systemd-logind[798]: New session 50 of user zuul.
Oct 08 09:53:23 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 08 09:53:23 compute-0 sshd-session[140901]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:53:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:24 compute-0 sudo[141055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgzxwecwveqefureeqxyklhitpnqhlib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917203.7247086-26-17706282709065/AnsiballZ_file.py'
Oct 08 09:53:24 compute-0 sudo[141055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:24 compute-0 python3.9[141057]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:24 compute-0 sudo[141055]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:24 compute-0 ceph-mon[73572]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:25 compute-0 sudo[141208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nntwefgcmedkgzutixtzxnqaubfxitbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917204.716148-62-186037131184901/AnsiballZ_stat.py'
Oct 08 09:53:25 compute-0 sudo[141208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:25 compute-0 python3.9[141210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:25 compute-0 sudo[141208]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:25.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:25] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:53:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:25] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:53:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:25 compute-0 sudo[141331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seakrjssrlaqjnqlkerdkcseluojfzmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917204.716148-62-186037131184901/AnsiballZ_copy.py'
Oct 08 09:53:25 compute-0 sudo[141331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:26 compute-0 python3.9[141333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917204.716148-62-186037131184901/.source.conf _original_basename=ceph.conf follow=False checksum=3890a3deab572d09518a0c50863eda009c004945 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:26 compute-0 sudo[141331]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:26 compute-0 sudo[141484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-excofzjszqynerlebeiurggbmsknffbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917206.2224674-62-118677937392417/AnsiballZ_stat.py'
Oct 08 09:53:26 compute-0 sudo[141484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:26 compute-0 python3.9[141486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:26 compute-0 sudo[141484]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:26 compute-0 ceph-mon[73572]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:26.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:27 compute-0 sudo[141608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebeuwlxxzmyssyvctxpoeipgphhlmptv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917206.2224674-62-118677937392417/AnsiballZ_copy.py'
Oct 08 09:53:27 compute-0 sudo[141608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:27 compute-0 python3.9[141610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917206.2224674-62-118677937392417/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=fbda66f5b6d5a9cd8683861e87e5a427d546a56c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:27 compute-0 sudo[141608]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:27.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:27.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:27 compute-0 sshd-session[140904]: Connection closed by 192.168.122.30 port 44554
Oct 08 09:53:27 compute-0 sshd-session[140901]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:53:27 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 08 09:53:27 compute-0 systemd[1]: session-50.scope: Consumed 2.827s CPU time.
Oct 08 09:53:27 compute-0 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Oct 08 09:53:27 compute-0 systemd-logind[798]: Removed session 50.
Oct 08 09:53:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:28 compute-0 ceph-mon[73572]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:28.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:29.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:53:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:29.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:53:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:30 compute-0 ceph-mon[73572]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:31.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:32 compute-0 sshd-session[141640]: Accepted publickey for zuul from 192.168.122.30 port 34184 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:53:32 compute-0 systemd-logind[798]: New session 51 of user zuul.
Oct 08 09:53:32 compute-0 systemd[1]: Started Session 51 of User zuul.
Oct 08 09:53:32 compute-0 sshd-session[141640]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:53:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:53:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:53:32 compute-0 ceph-mon[73572]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:53:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:53:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:33.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:53:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:33.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:33 compute-0 python3.9[141794]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:53:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:34 compute-0 sudo[141949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxasqdtqxjdbouuvyqoxphqxlwbtecwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917214.1990318-62-113818126166850/AnsiballZ_file.py'
Oct 08 09:53:34 compute-0 sudo[141949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:34 compute-0 python3.9[141951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:34 compute-0 sudo[141949]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:34 compute-0 ceph-mon[73572]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:35 compute-0 sudo[142102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osatvsxuzusyqsgfvutjdonkqtrqvxht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917214.9336157-62-97137421471038/AnsiballZ_file.py'
Oct 08 09:53:35 compute-0 sudo[142102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:35.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:35 compute-0 python3.9[142104]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:53:35 compute-0 sudo[142102]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:35.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:35] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct 08 09:53:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:35] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct 08 09:53:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:36 compute-0 python3.9[142255]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:53:36 compute-0 ceph-mon[73572]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:36 compute-0 sudo[142405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khrwbhqgurxphwhgprtcwyuccpzbsudm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917216.499994-131-66721966958549/AnsiballZ_seboolean.py'
Oct 08 09:53:36 compute-0 sudo[142405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:36.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:37 compute-0 python3.9[142407]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 08 09:53:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:38 compute-0 ceph-mon[73572]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:38.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:39.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:53:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:39.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:53:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:40 compute-0 sudo[142405]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:40 compute-0 sudo[142565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltcbzcixxmcxglwvqkgfcjsaujcxegjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917220.4419289-161-171812309176700/AnsiballZ_setup.py'
Oct 08 09:53:40 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 08 09:53:40 compute-0 sudo[142565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:40 compute-0 ceph-mon[73572]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:40 compute-0 python3.9[142567]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:53:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:41 compute-0 sudo[142565]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:41.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:41 compute-0 sudo[142577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:53:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:41 compute-0 sudo[142577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:53:41 compute-0 sudo[142577]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:41.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:41 compute-0 sudo[142675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqqoyxkgttwqikauzaqdngootmuwogu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917220.4419289-161-171812309176700/AnsiballZ_dnf.py'
Oct 08 09:53:41 compute-0 sudo[142675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:41 compute-0 python3.9[142677]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:53:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 08 09:53:42 compute-0 ceph-mon[73572]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:43 compute-0 sudo[142675]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:43.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:43.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:43 compute-0 sudo[142831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hardmepdbijtwkazlojwtfywaxwqarmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917223.2884665-197-143761836102108/AnsiballZ_systemd.py'
Oct 08 09:53:43 compute-0 sudo[142831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:44 compute-0 python3.9[142833]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 09:53:44 compute-0 sudo[142831]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:44 compute-0 ceph-mon[73572]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:45 compute-0 sudo[142987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgqocqjnsfhuxvhqqbpxorshvrawlnww ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917224.7785263-221-96638640717466/AnsiballZ_edpm_nftables_snippet.py'
Oct 08 09:53:45 compute-0 sudo[142987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:45.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:45.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:45 compute-0 python3[142989]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 08 09:53:45 compute-0 sudo[142987]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:45] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct 08 09:53:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:45] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct 08 09:53:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:46 compute-0 sudo[143140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmrgbtlkiizfkwaqvwexpvrvqgfcumog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917225.8480358-248-132689356445956/AnsiballZ_file.py'
Oct 08 09:53:46 compute-0 sudo[143140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:46 compute-0 python3.9[143142]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:46 compute-0 sudo[143140]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:46 compute-0 ceph-mon[73572]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:46.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:47 compute-0 sudo[143293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbplrwtcrauwkmuxxizhbbgpexhhodsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917226.6304061-272-56435141813335/AnsiballZ_stat.py'
Oct 08 09:53:47 compute-0 sudo[143293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:47 compute-0 python3.9[143295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:47 compute-0 sudo[143293]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:47.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:47.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:47 compute-0 sudo[143371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exzffqopactqvjoshumvsultmucacyhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917226.6304061-272-56435141813335/AnsiballZ_file.py'
Oct 08 09:53:47 compute-0 sudo[143371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:53:47
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta', 'backups', 'vms', 'volumes', '.nfs']
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:53:47 compute-0 python3.9[143373]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:47 compute-0 sudo[143371]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:53:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:53:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:53:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:53:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:53:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:53:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:53:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:53:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:53:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:53:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:53:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:53:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:53:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:53:48 compute-0 sudo[143524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znrzwmygkliculfyniesinuonlyrwhwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917228.1494627-308-232412761509438/AnsiballZ_stat.py'
Oct 08 09:53:48 compute-0 sudo[143524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:48 compute-0 python3.9[143526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:48 compute-0 sudo[143524]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:48.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:48 compute-0 sudo[143602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpvndgjakjzhsahuvtsvtgsjpxdwzvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917228.1494627-308-232412761509438/AnsiballZ_file.py'
Oct 08 09:53:48 compute-0 sudo[143602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:48 compute-0 ceph-mon[73572]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:49 compute-0 python3.9[143604]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zvs6eopd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:49 compute-0 sudo[143602]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:53:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:49.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:53:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:49.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:49 compute-0 sudo[143757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjqxcqvdimnqfuhmgrylpmsedztqmvbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917229.2854686-344-10799244928839/AnsiballZ_stat.py'
Oct 08 09:53:49 compute-0 sudo[143757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:49 compute-0 python3.9[143759]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:49 compute-0 sudo[143757]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:49 compute-0 sudo[143836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzlyrnnkvkpooiqdwvoznrvekgggtoxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917229.2854686-344-10799244928839/AnsiballZ_file.py'
Oct 08 09:53:49 compute-0 sudo[143836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:50 compute-0 python3.9[143838]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:50 compute-0 sudo[143836]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:50 compute-0 sudo[143988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeulsbfbwkbdiftgfokeyvbiuqouvjgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917230.4078243-383-25233574186698/AnsiballZ_command.py'
Oct 08 09:53:50 compute-0 sudo[143988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:50 compute-0 ceph-mon[73572]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:53:51 compute-0 python3.9[143990]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:53:51 compute-0 sudo[143988]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:51.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:51.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:51 compute-0 sudo[144142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzousmaelevvlegihqkvlleitdqzncdd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917231.2783847-407-156841419114899/AnsiballZ_edpm_nftables_from_files.py'
Oct 08 09:53:51 compute-0 sudo[144142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:51 compute-0 python3[144144]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 08 09:53:51 compute-0 sudo[144142]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:52 compute-0 sudo[144295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yajepfglbnwzggfbucsclvatmdccaede ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917232.3561835-431-261950654288084/AnsiballZ_stat.py'
Oct 08 09:53:52 compute-0 sudo[144295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:52 compute-0 python3.9[144297]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:52 compute-0 sudo[144295]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:52 compute-0 ceph-mon[73572]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:53.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:53 compute-0 sudo[144421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qybcdzrzlmqiaabqlmzbjbwrubxgsxtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917232.3561835-431-261950654288084/AnsiballZ_copy.py'
Oct 08 09:53:53 compute-0 sudo[144421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:53.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:53 compute-0 python3.9[144423]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917232.3561835-431-261950654288084/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:53 compute-0 sudo[144421]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:54 compute-0 sudo[144574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohabnbogczbsvnbvemofpjdavaydzpnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917233.9144201-476-193822884529516/AnsiballZ_stat.py'
Oct 08 09:53:54 compute-0 sudo[144574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:54 compute-0 python3.9[144576]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:54 compute-0 sudo[144574]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:54 compute-0 sudo[144699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-engqqgjcxtlzjpvuzsubmcpzrgfyulvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917233.9144201-476-193822884529516/AnsiballZ_copy.py'
Oct 08 09:53:54 compute-0 sudo[144699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:54 compute-0 ceph-mon[73572]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:55 compute-0 python3.9[144701]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917233.9144201-476-193822884529516/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:55 compute-0 sudo[144699]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:55.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:55.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:55 compute-0 sudo[144854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzijsfiudclksyqifmrfarpujtxklkvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917235.270399-521-258401605318005/AnsiballZ_stat.py'
Oct 08 09:53:55 compute-0 sudo[144854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:55] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct 08 09:53:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:55] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct 08 09:53:55 compute-0 python3.9[144856]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:55 compute-0 sudo[144854]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:56 compute-0 sudo[144980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-getwzkdarzbyudeprffzhvezwmprftue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917235.270399-521-258401605318005/AnsiballZ_copy.py'
Oct 08 09:53:56 compute-0 sudo[144980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:56 compute-0 python3.9[144982]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917235.270399-521-258401605318005/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:56 compute-0 sudo[144980]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:56.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:53:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:56.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:53:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:56.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:57 compute-0 ceph-mon[73572]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:53:57 compute-0 sudo[145133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-couactpgoruqxsmsmhvrgnvmlvujruwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917236.766317-566-270897633463533/AnsiballZ_stat.py'
Oct 08 09:53:57 compute-0 sudo[145133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:57 compute-0 python3.9[145135]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:57 compute-0 sudo[145133]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:57.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:57.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:57 compute-0 sudo[145258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnvjahdbkvfwhikzousttjntvtynrhwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917236.766317-566-270897633463533/AnsiballZ_copy.py'
Oct 08 09:53:57 compute-0 sudo[145258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:57 compute-0 python3.9[145260]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917236.766317-566-270897633463533/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:57 compute-0 sudo[145258]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:58 compute-0 sudo[145286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:53:58 compute-0 sudo[145286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:53:58 compute-0 sudo[145286]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:58 compute-0 sudo[145332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 08 09:53:58 compute-0 sudo[145332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:53:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:53:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:53:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:58 compute-0 sudo[145332]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:58 compute-0 sudo[145482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxkpdktynlpwxcybaeslkltpbcdutnhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917238.1312068-611-150404884701521/AnsiballZ_stat.py'
Oct 08 09:53:58 compute-0 sudo[145482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:53:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:53:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:53:58 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:58 compute-0 sudo[145485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:53:58 compute-0 sudo[145485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:53:58 compute-0 sudo[145485]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:58 compute-0 sudo[145510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:53:58 compute-0 sudo[145510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:53:58 compute-0 python3.9[145484]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:53:58 compute-0 sudo[145482]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:58.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:53:59 compute-0 ceph-mon[73572]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:53:59 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:59 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:59 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:59 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:59 compute-0 sudo[145691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmdtxolqinzfvneraccwspcgatzropsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917238.1312068-611-150404884701521/AnsiballZ_copy.py'
Oct 08 09:53:59 compute-0 sudo[145691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:59 compute-0 sudo[145510]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:53:59 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:53:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:53:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:53:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:53:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 343 B/s rd, 0 op/s
Oct 08 09:53:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:53:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:53:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:53:59 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:53:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:53:59 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:53:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:53:59 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:53:59 compute-0 sudo[145694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:53:59 compute-0 sudo[145694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:53:59 compute-0 sudo[145694]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:59 compute-0 sudo[145719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:53:59 compute-0 python3.9[145693]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917238.1312068-611-150404884701521/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:53:59 compute-0 sudo[145719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:53:59 compute-0 sudo[145691]: pam_unix(sudo:session): session closed for user root
Oct 08 09:53:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:59.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:53:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:53:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:53:59 compute-0 podman[145884]: 2025-10-08 09:53:59.708353284 +0000 UTC m=+0.051124158 container create 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:53:59 compute-0 systemd[1]: Started libpod-conmon-1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5.scope.
Oct 08 09:53:59 compute-0 sudo[145950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwviwbmeevfjciwdtpxhxclgfkuzmyuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917239.51328-656-231845156690019/AnsiballZ_file.py'
Oct 08 09:53:59 compute-0 podman[145884]: 2025-10-08 09:53:59.688163578 +0000 UTC m=+0.030934432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:53:59 compute-0 sudo[145950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:53:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:53:59 compute-0 podman[145884]: 2025-10-08 09:53:59.823440854 +0000 UTC m=+0.166211708 container init 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 09:53:59 compute-0 podman[145884]: 2025-10-08 09:53:59.83151865 +0000 UTC m=+0.174289484 container start 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 09:53:59 compute-0 podman[145884]: 2025-10-08 09:53:59.834891541 +0000 UTC m=+0.177662375 container attach 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 09:53:59 compute-0 systemd[1]: libpod-1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5.scope: Deactivated successfully.
Oct 08 09:53:59 compute-0 determined_curran[145952]: 167 167
Oct 08 09:53:59 compute-0 conmon[145952]: conmon 1e09e85eca49e7348e5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5.scope/container/memory.events
Oct 08 09:53:59 compute-0 podman[145884]: 2025-10-08 09:53:59.838699477 +0000 UTC m=+0.181470311 container died 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:53:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-743054a477bf8633542c0f3e5bbe82fc0b267f95270242f509e30c63bf0db02c-merged.mount: Deactivated successfully.
Oct 08 09:53:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:53:59 compute-0 podman[145884]: 2025-10-08 09:53:59.885715549 +0000 UTC m=+0.228486383 container remove 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 08 09:53:59 compute-0 systemd[1]: libpod-conmon-1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5.scope: Deactivated successfully.
Oct 08 09:53:59 compute-0 python3.9[145954]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:00 compute-0 sudo[145950]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:54:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:54:00 compute-0 ceph-mon[73572]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 343 B/s rd, 0 op/s
Oct 08 09:54:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:54:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:54:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:54:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:54:00 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:54:00 compute-0 podman[145978]: 2025-10-08 09:54:00.053980883 +0000 UTC m=+0.062370940 container create 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:54:00 compute-0 systemd[1]: Started libpod-conmon-40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c.scope.
Oct 08 09:54:00 compute-0 podman[145978]: 2025-10-08 09:54:00.025681189 +0000 UTC m=+0.034071296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:54:00 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:00 compute-0 podman[145978]: 2025-10-08 09:54:00.170290063 +0000 UTC m=+0.178680150 container init 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:54:00 compute-0 podman[145978]: 2025-10-08 09:54:00.179149995 +0000 UTC m=+0.187540052 container start 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:54:00 compute-0 podman[145978]: 2025-10-08 09:54:00.194699558 +0000 UTC m=+0.203089645 container attach 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:54:00 compute-0 sudo[146156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqxaghskxklhudfibrlbfwynpbzjkexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917240.2187536-680-64030077384909/AnsiballZ_command.py'
Oct 08 09:54:00 compute-0 sudo[146156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:00 compute-0 funny_williamson[146020]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:54:00 compute-0 funny_williamson[146020]: --> All data devices are unavailable
Oct 08 09:54:00 compute-0 systemd[1]: libpod-40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c.scope: Deactivated successfully.
Oct 08 09:54:00 compute-0 podman[145978]: 2025-10-08 09:54:00.534494395 +0000 UTC m=+0.542884482 container died 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:54:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d-merged.mount: Deactivated successfully.
Oct 08 09:54:00 compute-0 podman[145978]: 2025-10-08 09:54:00.623459372 +0000 UTC m=+0.631849429 container remove 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:54:00 compute-0 systemd[1]: libpod-conmon-40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c.scope: Deactivated successfully.
Oct 08 09:54:00 compute-0 sudo[145719]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:00 compute-0 python3.9[146161]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:54:00 compute-0 sudo[146176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:54:00 compute-0 sudo[146176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:54:00 compute-0 sudo[146176]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:00 compute-0 sudo[146156]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:00 compute-0 sudo[146204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:54:00 compute-0 sudo[146204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:54:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 257 B/s rd, 0 op/s
Oct 08 09:54:01 compute-0 podman[146347]: 2025-10-08 09:54:01.276306693 +0000 UTC m=+0.115160483 container create 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 09:54:01 compute-0 podman[146347]: 2025-10-08 09:54:01.18565325 +0000 UTC m=+0.024507090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:54:01 compute-0 ceph-mon[73572]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 257 B/s rd, 0 op/s
Oct 08 09:54:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:01 compute-0 systemd[1]: Started libpod-conmon-9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad.scope.
Oct 08 09:54:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:54:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:01.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:01 compute-0 podman[146347]: 2025-10-08 09:54:01.456520581 +0000 UTC m=+0.295374421 container init 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Oct 08 09:54:01 compute-0 podman[146347]: 2025-10-08 09:54:01.464561246 +0000 UTC m=+0.303414996 container start 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:54:01 compute-0 quirky_curie[146398]: 167 167
Oct 08 09:54:01 compute-0 systemd[1]: libpod-9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad.scope: Deactivated successfully.
Oct 08 09:54:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:01 compute-0 podman[146347]: 2025-10-08 09:54:01.499890063 +0000 UTC m=+0.338743903 container attach 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:54:01 compute-0 podman[146347]: 2025-10-08 09:54:01.501154674 +0000 UTC m=+0.340008444 container died 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:54:01 compute-0 sudo[146453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myebovymgqziwfgosjbbfmjolsadupbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917241.0147846-704-172998350349886/AnsiballZ_blockinfile.py'
Oct 08 09:54:01 compute-0 sudo[146453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0774eacbe220f0970995c0f21d551887b66aefb3de69dca0b0d477efc4f2508-merged.mount: Deactivated successfully.
Oct 08 09:54:01 compute-0 podman[146347]: 2025-10-08 09:54:01.552383575 +0000 UTC m=+0.391237345 container remove 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 08 09:54:01 compute-0 sudo[146443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:54:01 compute-0 sudo[146443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:54:01 compute-0 sudo[146443]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:01.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:01 compute-0 systemd[1]: libpod-conmon-9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad.scope: Deactivated successfully.
Oct 08 09:54:01 compute-0 python3.9[146476]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:01 compute-0 podman[146488]: 2025-10-08 09:54:01.759859874 +0000 UTC m=+0.068261094 container create 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:54:01 compute-0 sudo[146453]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:01 compute-0 systemd[1]: Started libpod-conmon-6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb.scope.
Oct 08 09:54:01 compute-0 podman[146488]: 2025-10-08 09:54:01.733266906 +0000 UTC m=+0.041668166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:54:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:01 compute-0 podman[146488]: 2025-10-08 09:54:01.874938573 +0000 UTC m=+0.183339843 container init 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:54:01 compute-0 podman[146488]: 2025-10-08 09:54:01.882240944 +0000 UTC m=+0.190642154 container start 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:54:01 compute-0 podman[146488]: 2025-10-08 09:54:01.885967017 +0000 UTC m=+0.194368287 container attach 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]: {
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:     "1": [
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:         {
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "devices": [
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "/dev/loop3"
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             ],
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "lv_name": "ceph_lv0",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "lv_size": "21470642176",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "name": "ceph_lv0",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "tags": {
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.cluster_name": "ceph",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.crush_device_class": "",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.encrypted": "0",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.osd_id": "1",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.type": "block",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.vdo": "0",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:                 "ceph.with_tpm": "0"
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             },
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "type": "block",
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:             "vg_name": "ceph_vg0"
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:         }
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]:     ]
Oct 08 09:54:02 compute-0 relaxed_lovelace[146513]: }
Oct 08 09:54:02 compute-0 systemd[1]: libpod-6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb.scope: Deactivated successfully.
Oct 08 09:54:02 compute-0 podman[146488]: 2025-10-08 09:54:02.206521508 +0000 UTC m=+0.514922728 container died 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 08 09:54:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9-merged.mount: Deactivated successfully.
Oct 08 09:54:02 compute-0 podman[146488]: 2025-10-08 09:54:02.264282566 +0000 UTC m=+0.572683796 container remove 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:54:02 compute-0 systemd[1]: libpod-conmon-6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb.scope: Deactivated successfully.
Oct 08 09:54:02 compute-0 sudo[146678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjuqwsuivwednvfxllbelqknuhaxuib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917242.0054204-731-161897901354053/AnsiballZ_command.py'
Oct 08 09:54:02 compute-0 sudo[146678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:02 compute-0 sudo[146204]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:02 compute-0 sudo[146681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:54:02 compute-0 sudo[146681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:54:02 compute-0 sudo[146681]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:02 compute-0 sudo[146706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:54:02 compute-0 sudo[146706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:54:02 compute-0 python3.9[146680]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:54:02 compute-0 sudo[146678]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:02 compute-0 podman[146795]: 2025-10-08 09:54:02.758051344 +0000 UTC m=+0.038928826 container create 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 09:54:02 compute-0 systemd[1]: Started libpod-conmon-1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830.scope.
Oct 08 09:54:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:54:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:54:02 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:54:02 compute-0 podman[146795]: 2025-10-08 09:54:02.830699382 +0000 UTC m=+0.111576874 container init 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:54:02 compute-0 podman[146795]: 2025-10-08 09:54:02.740176445 +0000 UTC m=+0.021053957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:54:02 compute-0 podman[146795]: 2025-10-08 09:54:02.836961739 +0000 UTC m=+0.117839221 container start 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 09:54:02 compute-0 brave_goodall[146846]: 167 167
Oct 08 09:54:02 compute-0 systemd[1]: libpod-1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830.scope: Deactivated successfully.
Oct 08 09:54:02 compute-0 podman[146795]: 2025-10-08 09:54:02.841961444 +0000 UTC m=+0.122838926 container attach 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 09:54:02 compute-0 podman[146795]: 2025-10-08 09:54:02.842263474 +0000 UTC m=+0.123140956 container died 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:54:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b090f1cc8af709c95def9ecf2c7ea915ac9df236554c08d3d2e0cfd99320b23-merged.mount: Deactivated successfully.
Oct 08 09:54:02 compute-0 podman[146795]: 2025-10-08 09:54:02.877715055 +0000 UTC m=+0.158592537 container remove 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:54:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:54:02 compute-0 systemd[1]: libpod-conmon-1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830.scope: Deactivated successfully.
Oct 08 09:54:03 compute-0 sudo[146973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbvetcvurpmjkeyfvxyknjixuikuhegp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917242.7698975-755-255351225764163/AnsiballZ_stat.py'
Oct 08 09:54:03 compute-0 sudo[146973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:03 compute-0 podman[146936]: 2025-10-08 09:54:03.026087263 +0000 UTC m=+0.043464827 container create a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:54:03 compute-0 systemd[1]: Started libpod-conmon-a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92.scope.
Oct 08 09:54:03 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:03 compute-0 podman[146936]: 2025-10-08 09:54:03.09478839 +0000 UTC m=+0.112165964 container init a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:54:03 compute-0 podman[146936]: 2025-10-08 09:54:03.00692968 +0000 UTC m=+0.024307284 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:54:03 compute-0 podman[146936]: 2025-10-08 09:54:03.103615692 +0000 UTC m=+0.120993256 container start a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:54:03 compute-0 podman[146936]: 2025-10-08 09:54:03.107745277 +0000 UTC m=+0.125122871 container attach a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:54:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 257 B/s rd, 0 op/s
Oct 08 09:54:03 compute-0 python3.9[146975]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:54:03 compute-0 sudo[146973]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:03.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:03.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:03 compute-0 lvm[147184]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:54:03 compute-0 lvm[147184]: VG ceph_vg0 finished
Oct 08 09:54:03 compute-0 sudo[147207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqiemaumtgeodxnsrfohvptqkzautniz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917243.4845207-779-154916421469896/AnsiballZ_command.py'
Oct 08 09:54:03 compute-0 sudo[147207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:03 compute-0 trusting_clarke[146979]: {}
Oct 08 09:54:03 compute-0 podman[146936]: 2025-10-08 09:54:03.807892539 +0000 UTC m=+0.825270103 container died a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 09:54:03 compute-0 systemd[1]: libpod-a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92.scope: Deactivated successfully.
Oct 08 09:54:03 compute-0 systemd[1]: libpod-a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92.scope: Consumed 1.015s CPU time.
Oct 08 09:54:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85-merged.mount: Deactivated successfully.
Oct 08 09:54:03 compute-0 podman[146936]: 2025-10-08 09:54:03.864874221 +0000 UTC m=+0.882251795 container remove a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:54:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:03 compute-0 systemd[1]: libpod-conmon-a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92.scope: Deactivated successfully.
Oct 08 09:54:03 compute-0 ceph-mon[73572]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 257 B/s rd, 0 op/s
Oct 08 09:54:03 compute-0 sudo[146706]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:54:03 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:54:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:54:03 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:54:03 compute-0 python3.9[147210]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:54:03 compute-0 sudo[147224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:54:03 compute-0 sudo[147224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:54:03 compute-0 sudo[147224]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:04 compute-0 sudo[147207]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:04 compute-0 sudo[147401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tytlarcbxzuwqxydwiosyklfyhkuznco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917244.1835618-803-175758745463619/AnsiballZ_file.py'
Oct 08 09:54:04 compute-0 sudo[147401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:04 compute-0 python3.9[147403]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:04 compute-0 sudo[147401]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:54:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:54:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 429 B/s rd, 0 op/s
Oct 08 09:54:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:05.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:54:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:05.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:54:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:05] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct 08 09:54:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:05] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct 08 09:54:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:05 compute-0 python3.9[147554]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:54:05 compute-0 ceph-mon[73572]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 429 B/s rd, 0 op/s
Oct 08 09:54:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:06.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:54:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:06.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:07 compute-0 sudo[147707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbclmkphysrxlrsrqpqltubikjubbnci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917246.746753-923-202068371300274/AnsiballZ_command.py'
Oct 08 09:54:07 compute-0 sudo[147707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 257 B/s rd, 0 op/s
Oct 08 09:54:07 compute-0 python3.9[147709]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:54:07 compute-0 ovs-vsctl[147710]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 08 09:54:07 compute-0 sudo[147707]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:07 compute-0 ceph-mon[73572]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 257 B/s rd, 0 op/s
Oct 08 09:54:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:54:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:07.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:54:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:54:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:07.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:54:07 compute-0 sudo[147860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvzowbbeomotpaclxxuyzczeypusodhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917247.5181048-950-208991605322492/AnsiballZ_command.py'
Oct 08 09:54:07 compute-0 sudo[147860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:07 compute-0 python3.9[147862]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:54:07 compute-0 sudo[147860]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:08 compute-0 sudo[148016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkqstqcrfyvrvwrwphfelohvaoyzujgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917248.2727962-974-156486503619334/AnsiballZ_command.py'
Oct 08 09:54:08 compute-0 sudo[148016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:08 compute-0 python3.9[148018]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:54:08 compute-0 ovs-vsctl[148019]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 08 09:54:08 compute-0 sudo[148016]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:08.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 343 B/s rd, 0 op/s
Oct 08 09:54:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:09 compute-0 python3.9[148170]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:54:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:54:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:09.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:54:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:09.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:10 compute-0 sudo[148323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acxpvqdgqnghggjaritffeoaltoqgiaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917249.8097413-1025-39624402788924/AnsiballZ_file.py'
Oct 08 09:54:10 compute-0 sudo[148323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:10 compute-0 ceph-mon[73572]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 343 B/s rd, 0 op/s
Oct 08 09:54:10 compute-0 python3.9[148325]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:10 compute-0 sudo[148323]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:10 compute-0 sudo[148475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzfsrghavfmhdatrmtowxzjrfbdhfcly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917250.5772748-1049-240700581171623/AnsiballZ_stat.py'
Oct 08 09:54:10 compute-0 sudo[148475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:11 compute-0 python3.9[148477]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:54:11 compute-0 sudo[148475]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:11 compute-0 ceph-mon[73572]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:11 compute-0 sudo[148554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inbbjkcsvsfpuyfqciptxiourhnemdvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917250.5772748-1049-240700581171623/AnsiballZ_file.py'
Oct 08 09:54:11 compute-0 sudo[148554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:54:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:11.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:54:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:11 compute-0 python3.9[148556]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:11 compute-0 sudo[148554]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:54:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:11.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:54:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:11 compute-0 sudo[148707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhmkdcvfxfoclfyezwwapdjwngxlwtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917251.63288-1049-276378876294903/AnsiballZ_stat.py'
Oct 08 09:54:11 compute-0 sudo[148707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:12 compute-0 python3.9[148709]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:54:12 compute-0 sudo[148707]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:12 compute-0 sudo[148785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxhdntakjwumtwiqkobcnzgpqfftiacr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917251.63288-1049-276378876294903/AnsiballZ_file.py'
Oct 08 09:54:12 compute-0 sudo[148785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:12 compute-0 python3.9[148787]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:12 compute-0 sudo[148785]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:13 compute-0 sudo[148938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifjujsznutfsswehfmidrgvetsqbcfdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917252.9741375-1118-76674465298863/AnsiballZ_file.py'
Oct 08 09:54:13 compute-0 sudo[148938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:54:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:13.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:54:13 compute-0 python3.9[148940]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:13 compute-0 sudo[148938]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:13.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:14 compute-0 sudo[149091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvfhssunkemcwgogwlzwzsjmrjndtplw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917253.7345915-1142-260963523326136/AnsiballZ_stat.py'
Oct 08 09:54:14 compute-0 sudo[149091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:14 compute-0 ceph-mon[73572]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:14 compute-0 python3.9[149093]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:54:14 compute-0 sudo[149091]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:14 compute-0 sudo[149169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgbojhhachsuonvpgswfquvdjzlbglyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917253.7345915-1142-260963523326136/AnsiballZ_file.py'
Oct 08 09:54:14 compute-0 sudo[149169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:14 compute-0 python3.9[149171]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:14 compute-0 sudo[149169]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:54:15 compute-0 sudo[149322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftxmmyvsvfqbsemljnnqadfcapirdyeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917254.9571164-1178-95618011773166/AnsiballZ_stat.py'
Oct 08 09:54:15 compute-0 sudo[149322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:15 compute-0 ceph-mon[73572]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:54:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:15 compute-0 python3.9[149324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:54:15 compute-0 sudo[149322]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:15.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:54:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:15.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:54:15 compute-0 sudo[149400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnitpkwnckwucntjomdojvxojyxusjez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917254.9571164-1178-95618011773166/AnsiballZ_file.py'
Oct 08 09:54:15 compute-0 sudo[149400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:15] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct 08 09:54:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:15] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct 08 09:54:15 compute-0 python3.9[149402]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:15 compute-0 sudo[149400]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095416 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:54:16 compute-0 sudo[149553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvvppjicvjzofiopcisldqqmppopsznq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917256.1519701-1214-228154459510615/AnsiballZ_systemd.py'
Oct 08 09:54:16 compute-0 sudo[149553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:16.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:17 compute-0 python3.9[149555]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:54:17 compute-0 systemd[1]: Reloading.
Oct 08 09:54:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:54:17 compute-0 systemd-rc-local-generator[149579]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:54:17 compute-0 systemd-sysv-generator[149582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:54:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:17.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:17 compute-0 sudo[149553]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:17.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:54:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:54:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:54:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:54:17 compute-0 sudo[149745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gognprkckvmpxmlvzmbhldqtvyarwrgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917257.657338-1238-33972715372262/AnsiballZ_stat.py'
Oct 08 09:54:17 compute-0 sudo[149745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:54:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:54:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:54:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:54:18 compute-0 python3.9[149747]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:54:18 compute-0 sudo[149745]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:18 compute-0 ceph-mon[73572]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:54:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:54:18 compute-0 sudo[149823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbhmwjbqynuvtviuxebpriuydnuuyyfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917257.657338-1238-33972715372262/AnsiballZ_file.py'
Oct 08 09:54:18 compute-0 sudo[149823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:18 compute-0 python3.9[149825]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:18 compute-0 sudo[149823]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:18.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:19 compute-0 sudo[149976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mahsqebztcghxmjcwoptpdposfepdqbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917258.8770134-1274-14281994122809/AnsiballZ_stat.py'
Oct 08 09:54:19 compute-0 sudo[149976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:19 compute-0 ceph-mon[73572]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:19 compute-0 python3.9[149978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:54:19 compute-0 sudo[149976]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:19.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:19.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:19 compute-0 sudo[150054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbuspugprheoixtgyxbimconnwrmchxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917258.8770134-1274-14281994122809/AnsiballZ_file.py'
Oct 08 09:54:19 compute-0 sudo[150054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:19 compute-0 python3.9[150056]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:19 compute-0 sudo[150054]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:20 compute-0 sudo[150207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guqywwzugjboqmrhyheqrbvfxtneiojc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917260.123599-1310-82002434966635/AnsiballZ_systemd.py'
Oct 08 09:54:20 compute-0 sudo[150207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:20 compute-0 python3.9[150209]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:54:20 compute-0 systemd[1]: Reloading.
Oct 08 09:54:20 compute-0 systemd-rc-local-generator[150236]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:54:20 compute-0 systemd-sysv-generator[150239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:54:21 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 09:54:21 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 09:54:21 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 09:54:21 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 09:54:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:54:21 compute-0 sudo[150207]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:21 compute-0 ceph-mon[73572]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:54:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:21.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:21 compute-0 sudo[150329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:54:21 compute-0 sudo[150329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:54:21 compute-0 sudo[150329]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:21 compute-0 sudo[150427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hneiazzanisirywmgwdrenfcvjmbqyvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917261.4970436-1340-162977662220709/AnsiballZ_file.py'
Oct 08 09:54:21 compute-0 sudo[150427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:21 compute-0 python3.9[150429]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:21 compute-0 sudo[150427]: pam_unix(sudo:session): session closed for user root
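The sudo/python3.9 triplets are Ansible at work: become wraps each module as /bin/sh -c 'echo BECOME-SUCCESS-<random> ; python3 AnsiballZ_<module>.py' so the controller can tell where privilege-escalation noise ends and module output begins, and the "Invoked with" line echoes the module's resolved parameters. A rough stdlib equivalent of this particular ansible.builtin.file task (a sketch, not the module's code; chcon stands in for Ansible's SELinux handling):

```python
import os
import shutil
import subprocess

# Path, owner, mode and setype copied from the Invoked line above.
path = "/var/lib/openstack/healthchecks"
os.makedirs(path, exist_ok=True)
os.chmod(path, 0o755)
shutil.chown(path, user="zuul", group="zuul")
# Stand-in for setype=container_file_t
subprocess.run(["chcon", "-t", "container_file_t", path], check=True)
```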
Oct 08 09:54:22 compute-0 sudo[150580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obdemikdisqprotzgvwcutzdbznhcala ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917262.210522-1364-91997752062573/AnsiballZ_stat.py'
Oct 08 09:54:22 compute-0 sudo[150580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:22 compute-0 python3.9[150582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:54:22 compute-0 sudo[150580]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:23 compute-0 sudo[150704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvlhjzljncslfjzynlrcxrigciksrfoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917262.210522-1364-91997752062573/AnsiballZ_copy.py'
Oct 08 09:54:23 compute-0 sudo[150704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:54:23 compute-0 python3.9[150706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917262.210522-1364-91997752062573/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:23 compute-0 sudo[150704]: pam_unix(sudo:session): session closed for user root
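The stat-then-copy pair is how ansible.legacy.copy stays idempotent: it first takes the SHA-1 of the destination and only transfers the file when that digest differs from the source checksum (4098dd010265fabdf5c26b97d169fc4e575ff457 for this healthcheck script). The digest it compares is a plain file SHA-1:

```python
import hashlib

def sha1_of(path: str) -> str:
    """SHA-1 of a file, as the stat/copy checksum fields above use."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# The copy step is skipped when this matches the source checksum.
print(sha1_of("/var/lib/openstack/healthchecks/ovn_controller/healthcheck"))
```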
Oct 08 09:54:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:54:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:23.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:54:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
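_set_new_cache_sizes is the monitor's periodic memory autotuning: a target cache_size is split between caches for incremental osdmaps, full osdmaps, and the backing key-value store. The reported sub-allocations should account for nearly all of the target, which is easy to verify:

```python
# Values copied from the mon line above.
cache_size = 1020054731
parts = {"inc_alloc": 348127232, "full_alloc": 348127232, "kv_alloc": 318767104}
total = sum(parts.values())
print(total, cache_size - total)  # 1015021568, ~4.8 MiB of slack
```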
Oct 08 09:54:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:23.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:23 compute-0 sudo[150857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejsbbshtordgpkroomobdjxesfelvdcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917263.712184-1415-128808773869072/AnsiballZ_file.py'
Oct 08 09:54:23 compute-0 sudo[150857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:24 compute-0 python3.9[150859]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:24 compute-0 ceph-mon[73572]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:54:24 compute-0 sudo[150857]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:24 compute-0 sudo[151009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juqckrckyzizbsklkoisbgcnopdcodvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917264.3798847-1439-215924471615963/AnsiballZ_stat.py'
Oct 08 09:54:24 compute-0 sudo[151009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:24 compute-0 python3.9[151011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:54:24 compute-0 sudo[151009]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:25 compute-0 sudo[151133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdjufcogaymyadzxbqwplviaojjfkfwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917264.3798847-1439-215924471615963/AnsiballZ_copy.py'
Oct 08 09:54:25 compute-0 sudo[151133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:25 compute-0 ceph-mon[73572]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:25 compute-0 python3.9[151135]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917264.3798847-1439-215924471615963/.source.json _original_basename=.ast15ltk follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:25 compute-0 sudo[151133]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:25.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:54:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:25.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:25] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:54:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:25] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
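These paired entries (the container's stdout plus the mgr's own cherrypy access log) record the same Prometheus scrape of the mgr exporter; 48336 bytes of metrics went back with a 200. Fetching the same endpoint by hand, assuming the mgr prometheus module's default port 9283 (the port is not shown in the line):

```python
import urllib.request

# Plaintext exposition-format metrics from the ceph-mgr prometheus module.
with urllib.request.urlopen("http://compute-0:9283/metrics", timeout=5) as r:
    body = r.read().decode()
print(body.splitlines()[:3])
```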
Oct 08 09:54:25 compute-0 sudo[151286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xixdbzwwahskwrhvicgvlqwomgvibrji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917265.6465929-1484-129140338054105/AnsiballZ_file.py'
Oct 08 09:54:25 compute-0 sudo[151286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:26 compute-0 python3.9[151288]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:26 compute-0 sudo[151286]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:26 compute-0 sudo[151438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elazqgafqlkaoibyyyjpnlwaivlvchrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917266.3318439-1508-114451629219697/AnsiballZ_stat.py'
Oct 08 09:54:26 compute-0 sudo[151438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:26 compute-0 sudo[151438]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:26.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:54:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:26.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
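Both webhook receivers for the ceph-dashboard route are unreachable: the POST to compute-2 times out on connect, retries to both peers exhaust their context deadlines, and the alerts are dropped. A direct stdlib probe of the receiver URL (copied from the error above) distinguishes a connect timeout from an HTTP-level failure:

```python
import json
import urllib.request

url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
req = urllib.request.Request(url, data=json.dumps({}).encode(),
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.status)
except OSError as exc:
    # A connect timeout here points at firewalling or nothing listening
    # on 8443, matching the "i/o timeout" in the alertmanager log.
    print("unreachable:", exc)
```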
Oct 08 09:54:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:27 compute-0 sudo[151562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tunwtjffdyetnetpmiwmvuehpnekvinc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917266.3318439-1508-114451629219697/AnsiballZ_copy.py'
Oct 08 09:54:27 compute-0 sudo[151562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:27 compute-0 sudo[151562]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:27.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004180 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:54:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:27.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:54:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:28 compute-0 sudo[151717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uioajtyicrrgoguudzjkdefajxbfvrho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917267.6543424-1559-262420060866093/AnsiballZ_container_config_data.py'
Oct 08 09:54:28 compute-0 sudo[151717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:28 compute-0 ceph-mon[73572]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:28 compute-0 python3.9[151719]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 08 09:54:28 compute-0 sudo[151717]: pam_unix(sudo:session): session closed for user root
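container_config_data is the step that gathers the per-service container startup definitions: every file matching config_pattern under config_path is loaded and config_overrides (empty here) is layered on top. A stdlib approximation of that gathering step (my sketch, not the module's code):

```python
import glob
import json
import os

config_path = "/var/lib/edpm-config/container-startup-config/ovn_controller"
config_overrides = {}

configs = {}
for path in glob.glob(os.path.join(config_path, "*.json")):
    with open(path) as f:
        # Overrides win over the on-disk definition.
        configs[os.path.basename(path)] = {**json.load(f), **config_overrides}
print(sorted(configs))
```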
Oct 08 09:54:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:28 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:54:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:28 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:54:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:28.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:28 compute-0 sudo[151869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qphmmswdfufzazdiamgszqtpafiwodtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917268.5635045-1586-32612776010274/AnsiballZ_container_config_hash.py'
Oct 08 09:54:28 compute-0 sudo[151869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:54:29 compute-0 python3.9[151871]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 09:54:29 compute-0 sudo[151869]: pam_unix(sudo:session): session closed for user root
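container_config_hash then fingerprints the generated config volumes under /var/lib/config-data so a container is restarted only when its configuration actually changed. One way to build such a digest (a stand-in for the idea, not edpm's exact algorithm):

```python
import hashlib
import os

def dir_hash(root: str) -> str:
    """Stable digest over relative paths and file contents."""
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # make the walk order deterministic
        for name in sorted(filenames):
            full = os.path.join(dirpath, name)
            h.update(os.path.relpath(full, root).encode())
            with open(full, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

print(dir_hash("/var/lib/config-data"))
```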
Oct 08 09:54:29 compute-0 ceph-mon[73572]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:54:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:29.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:29.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:29 compute-0 sudo[152023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptaicmaprbrnzowgoywqrsibwtexhbzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917269.4719594-1613-22505172543345/AnsiballZ_podman_container_info.py'
Oct 08 09:54:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:29 compute-0 sudo[152023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:30 compute-0 python3.9[152025]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 08 09:54:30 compute-0 sudo[152023]: pam_unix(sudo:session): session closed for user root
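podman_container_info with name=None amounts to inspecting every container on the host; the same data is available from the CLI the module wraps:

```python
import json
import subprocess

# List all container IDs, then inspect them in one call; the exact flags
# Ansible passes are not visible in the log.
ids = subprocess.run(["podman", "ps", "-aq"], capture_output=True,
                     text=True, check=True).stdout.split()
info = []
if ids:
    out = subprocess.run(["podman", "container", "inspect", *ids],
                         capture_output=True, text=True, check=True).stdout
    info = json.loads(out)
print(len(info), "containers")
```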
Oct 08 09:54:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:54:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:54:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:54:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
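The three reaper events bracket a complete NFS grace cycle: Ganesha entered grace at 09:54:25 with a nominal 90 s duration, reloaded reclaimable-client info from the backend at 09:54:28, and lifted grace early at 09:54:31 because no clients held state to reclaim (clid count(0)). From the timestamps:

```python
from datetime import datetime

# Timestamps copied from the reaper events above.
fmt = "%H:%M:%S"
start = datetime.strptime("09:54:25", fmt)
lifted = datetime.strptime("09:54:31", fmt)
print((lifted - start).seconds, "of the nominal 90 s used")  # 6
```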
Oct 08 09:54:31 compute-0 sudo[152202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cajlgnmhswjqyvvfynejjfszxzalatne ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917271.0843346-1652-166363542607023/AnsiballZ_edpm_container_manage.py'
Oct 08 09:54:31 compute-0 sudo[152202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:31 compute-0 python3[152204]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
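edpm_container_manage closes the loop: it reads the kolla-style JSON written a few steps earlier from config_dir, uses config_id to tag the containers it manages, and reconciles podman state against the definitions one at a time (concurrency=1), teeing stdout under /var/log/containers/stdouts. A skeleton of that reconcile idea (my sketch; the real module drives podman):

```python
import glob
import json
import os

# Each *.json under config_dir describes one container to ensure is
# running under the config_id label.
config_dir = "/var/lib/edpm-config/container-startup-config/ovn_controller"
for path in sorted(glob.glob(os.path.join(config_dir, "*.json"))):
    name = os.path.splitext(os.path.basename(path))[0]
    with open(path) as f:
        desired = json.load(f)
    print("would ensure container", name, "image", desired.get("image"))
```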
Oct 08 09:54:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 09:54:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8245 writes, 33K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8245 writes, 1525 syncs, 5.41 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8245 writes, 33K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 21.32 MB, 0.04 MB/s
                                           Interval WAL: 8245 writes, 1525 syncs, 5.41 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 08 09:54:32 compute-0 ceph-mon[73572]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:54:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:54:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:54:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:54:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:54:33 compute-0 ceph-mon[73572]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:54:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:54:35 compute-0 ceph-mon[73572]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:54:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:35.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:35] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:54:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:35] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:54:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:36 compute-0 podman[152219]: 2025-10-08 09:54:36.952389881 +0000 UTC m=+5.102858281 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 08 09:54:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:36.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:37 compute-0 podman[152341]: 2025-10-08 09:54:37.093335408 +0000 UTC m=+0.047892519 container create 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 08 09:54:37 compute-0 podman[152341]: 2025-10-08 09:54:37.066160697 +0000 UTC m=+0.020717828 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 08 09:54:37 compute-0 python3[152204]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 08 09:54:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:54:37 compute-0 sudo[152202]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:37.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:37.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:37 compute-0 sudo[152530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thafqfsiaktqtaholdhsjfzjadvxdgtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917277.414272-1676-90951765846564/AnsiballZ_stat.py'
Oct 08 09:54:37 compute-0 sudo[152530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:37 compute-0 python3.9[152532]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:54:37 compute-0 sudo[152530]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:38 compute-0 ceph-mon[73572]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:54:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095438 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:54:38 compute-0 sudo[152685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlpjybctraguzjohrzhayvchdhsjoaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917278.2332282-1703-229077725623529/AnsiballZ_file.py'
Oct 08 09:54:38 compute-0 sudo[152685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:38 compute-0 python3.9[152687]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:38 compute-0 sudo[152685]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:38.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:38 compute-0 sudo[152761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkxaipreonglduyabpjyfhyfooujevao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917278.2332282-1703-229077725623529/AnsiballZ_stat.py'
Oct 08 09:54:38 compute-0 sudo[152761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:39 compute-0 python3.9[152763]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:54:39 compute-0 sudo[152761]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:54:39 compute-0 ceph-mon[73572]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:54:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:54:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:39.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:54:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:39 compute-0 sudo[152913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsmuoidaonzlfypzlsfrpoonhlwojxef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917279.1625543-1703-115701383212134/AnsiballZ_copy.py'
Oct 08 09:54:39 compute-0 sudo[152913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:39 compute-0 python3.9[152915]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917279.1625543-1703-115701383212134/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:54:39 compute-0 sudo[152913]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:40 compute-0 sudo[152990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrpkctktkbpnsmtpvptizixozoqkpflu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917279.1625543-1703-115701383212134/AnsiballZ_systemd.py'
Oct 08 09:54:40 compute-0 sudo[152990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:40 compute-0 python3.9[152992]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 09:54:40 compute-0 systemd[1]: Reloading.
Oct 08 09:54:40 compute-0 systemd-rc-local-generator[153018]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:54:40 compute-0 systemd-sysv-generator[153021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:54:40 compute-0 sudo[152990]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:40 compute-0 sudo[153103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajhhwyxjrfjgbdladbponrcltpefgynz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917279.1625543-1703-115701383212134/AnsiballZ_systemd.py'
Oct 08 09:54:40 compute-0 sudo[153103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:41 compute-0 python3.9[153105]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:54:41 compute-0 systemd[1]: Reloading.
Oct 08 09:54:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:41 compute-0 systemd-sysv-generator[153137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:54:41 compute-0 systemd-rc-local-generator[153133]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:54:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:41.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:41 compute-0 systemd[1]: Starting ovn_controller container...
Oct 08 09:54:41 compute-0 sudo[153147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:54:41 compute-0 sudo[153147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:54:41 compute-0 sudo[153147]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:41 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56be1da2d7b5a9f201fba1da953ea696763ec191ff50f2e7e39fa2399a7ba07a/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 08 09:54:41 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f.
Oct 08 09:54:41 compute-0 podman[153153]: 2025-10-08 09:54:41.815130131 +0000 UTC m=+0.142160076 container init 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:54:41 compute-0 ovn_controller[153187]: + sudo -E kolla_set_configs
Oct 08 09:54:41 compute-0 podman[153153]: 2025-10-08 09:54:41.845789413 +0000 UTC m=+0.172819318 container start 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 09:54:41 compute-0 edpm-start-podman-container[153153]: ovn_controller
Oct 08 09:54:41 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 08 09:54:41 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 08 09:54:41 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 08 09:54:41 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 08 09:54:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:41 compute-0 systemd[153219]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 08 09:54:41 compute-0 edpm-start-podman-container[153146]: Creating additional drop-in dependency for "ovn_controller" (750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f)
Oct 08 09:54:41 compute-0 podman[153194]: 2025-10-08 09:54:41.93087113 +0000 UTC m=+0.075328349 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 09:54:41 compute-0 systemd[1]: 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f-61a69f9453ed5888.service: Main process exited, code=exited, status=1/FAILURE
Oct 08 09:54:41 compute-0 systemd[1]: 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f-61a69f9453ed5888.service: Failed with result 'exit-code'.
Oct 08 09:54:41 compute-0 systemd[1]: Reloading.
Oct 08 09:54:42 compute-0 systemd-sysv-generator[153280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:54:42 compute-0 systemd-rc-local-generator[153276]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:54:42 compute-0 systemd[153219]: Queued start job for default target Main User Target.
Oct 08 09:54:42 compute-0 systemd[153219]: Created slice User Application Slice.
Oct 08 09:54:42 compute-0 systemd[153219]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 08 09:54:42 compute-0 systemd[153219]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 09:54:42 compute-0 systemd[153219]: Reached target Paths.
Oct 08 09:54:42 compute-0 systemd[153219]: Reached target Timers.
Oct 08 09:54:42 compute-0 systemd[153219]: Starting D-Bus User Message Bus Socket...
Oct 08 09:54:42 compute-0 systemd[153219]: Starting Create User's Volatile Files and Directories...
Oct 08 09:54:42 compute-0 systemd[153219]: Finished Create User's Volatile Files and Directories.
Oct 08 09:54:42 compute-0 systemd[153219]: Listening on D-Bus User Message Bus Socket.
Oct 08 09:54:42 compute-0 systemd[153219]: Reached target Sockets.
Oct 08 09:54:42 compute-0 systemd[153219]: Reached target Basic System.
Oct 08 09:54:42 compute-0 systemd[153219]: Reached target Main User Target.
Oct 08 09:54:42 compute-0 systemd[153219]: Startup finished in 156ms.
Oct 08 09:54:42 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 08 09:54:42 compute-0 systemd[1]: Started ovn_controller container.
Oct 08 09:54:42 compute-0 ceph-mon[73572]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:42 compute-0 systemd[1]: Started Session c1 of User root.
Oct 08 09:54:42 compute-0 sudo[153103]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:42 compute-0 ovn_controller[153187]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 09:54:42 compute-0 ovn_controller[153187]: INFO:__main__:Validating config file
Oct 08 09:54:42 compute-0 ovn_controller[153187]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 09:54:42 compute-0 ovn_controller[153187]: INFO:__main__:Writing out command to execute
Oct 08 09:54:42 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 08 09:54:42 compute-0 ovn_controller[153187]: ++ cat /run_command
Oct 08 09:54:42 compute-0 ovn_controller[153187]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 08 09:54:42 compute-0 ovn_controller[153187]: + ARGS=
Oct 08 09:54:42 compute-0 ovn_controller[153187]: + sudo kolla_copy_cacerts
Oct 08 09:54:42 compute-0 systemd[1]: Started Session c2 of User root.
Oct 08 09:54:42 compute-0 ovn_controller[153187]: + [[ ! -n '' ]]
Oct 08 09:54:42 compute-0 ovn_controller[153187]: + . kolla_extend_start
Oct 08 09:54:42 compute-0 ovn_controller[153187]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 08 09:54:42 compute-0 ovn_controller[153187]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 08 09:54:42 compute-0 ovn_controller[153187]: + umask 0022
Oct 08 09:54:42 compute-0 ovn_controller[153187]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 08 09:54:42 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.4137] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.4145] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.4156] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.4162] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.4166] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 09:54:42 compute-0 kernel: br-int: entered promiscuous mode
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00019|main|INFO|OVS feature set changed, force recompute.
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.4328] manager: (ovn-9a0c8b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 08 09:54:42 compute-0 ovn_controller[153187]: 2025-10-08T09:54:42Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.4337] manager: (ovn-6f73e5-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct 08 09:54:42 compute-0 systemd-udevd[153322]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 09:54:42 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 08 09:54:42 compute-0 systemd-udevd[153324]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.4539] device (genev_sys_6081): carrier: link connected
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.4542] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Oct 08 09:54:42 compute-0 NetworkManager[44872]: <info>  [1759917282.8823] manager: (ovn-b58ac6-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct 08 09:54:43 compute-0 sudo[153453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwtewrgsemzyqsifqepqzzuzlrgenvje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917282.8326519-1787-205810095467246/AnsiballZ_command.py'
Oct 08 09:54:43 compute-0 sudo[153453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:43 compute-0 ceph-mon[73572]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:43 compute-0 python3.9[153455]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:54:43 compute-0 ovs-vsctl[153456]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 08 09:54:43 compute-0 sudo[153453]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:43.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:43.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:43 compute-0 sudo[153606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqqpdkqvigyrghioeqcnkzvwrdaeyplw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917283.6020515-1811-188772472775306/AnsiballZ_command.py'
Oct 08 09:54:43 compute-0 sudo[153606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:44 compute-0 python3.9[153609]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:54:44 compute-0 ovs-vsctl[153611]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 08 09:54:44 compute-0 sudo[153606]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:44 compute-0 sudo[153762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdlwpkurmnpwutxdfqwcpvlubojqkcia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917284.5627267-1853-98582875670369/AnsiballZ_command.py'
Oct 08 09:54:44 compute-0 sudo[153762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:44 compute-0 python3.9[153764]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:54:44 compute-0 ovs-vsctl[153765]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 08 09:54:45 compute-0 sudo[153762]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:45 compute-0 sshd-session[141643]: Connection closed by 192.168.122.30 port 34184
Oct 08 09:54:45 compute-0 sshd-session[141640]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:54:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:45 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Oct 08 09:54:45 compute-0 systemd[1]: session-51.scope: Consumed 55.349s CPU time.
Oct 08 09:54:45 compute-0 systemd-logind[798]: Session 51 logged out. Waiting for processes to exit.
Oct 08 09:54:45 compute-0 systemd-logind[798]: Removed session 51.
Oct 08 09:54:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:45.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:45] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:54:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:45] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct 08 09:54:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:46 compute-0 ceph-mon[73572]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:54:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:46.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:54:47 compute-0 ceph-mon[73572]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:54:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:54:47
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:54:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:47.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:54:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:54:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:54:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:54:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:54:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:54:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:54:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:54:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:54:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:54:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:54:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:54:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:54:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:54:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:54:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:48.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:54:49 compute-0 ceph-mon[73572]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:54:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:54:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:49.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:54:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:49.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:51 compute-0 sshd-session[153799]: Accepted publickey for zuul from 192.168.122.30 port 52014 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:54:51 compute-0 systemd-logind[798]: New session 53 of user zuul.
Oct 08 09:54:51 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 08 09:54:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:51 compute-0 sshd-session[153799]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:54:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:51.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:51.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:51 compute-0 ceph-mon[73572]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:52 compute-0 python3.9[153953]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:54:52 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 08 09:54:52 compute-0 systemd[153219]: Activating special unit Exit the Session...
Oct 08 09:54:52 compute-0 systemd[153219]: Stopped target Main User Target.
Oct 08 09:54:52 compute-0 systemd[153219]: Stopped target Basic System.
Oct 08 09:54:52 compute-0 systemd[153219]: Stopped target Paths.
Oct 08 09:54:52 compute-0 systemd[153219]: Stopped target Sockets.
Oct 08 09:54:52 compute-0 systemd[153219]: Stopped target Timers.
Oct 08 09:54:52 compute-0 systemd[153219]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 08 09:54:52 compute-0 systemd[153219]: Closed D-Bus User Message Bus Socket.
Oct 08 09:54:52 compute-0 systemd[153219]: Stopped Create User's Volatile Files and Directories.
Oct 08 09:54:52 compute-0 systemd[153219]: Removed slice User Application Slice.
Oct 08 09:54:52 compute-0 systemd[153219]: Reached target Shutdown.
Oct 08 09:54:52 compute-0 systemd[153219]: Finished Exit the Session.
Oct 08 09:54:52 compute-0 systemd[153219]: Reached target Exit the Session.
Oct 08 09:54:52 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 08 09:54:52 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 08 09:54:52 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 08 09:54:52 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 08 09:54:52 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 08 09:54:52 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 08 09:54:52 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 08 09:54:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:53 compute-0 ceph-mon[73572]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:53.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:53 compute-0 sudo[154110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leeepgjcswaltwbkphipirkmiwortgxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917292.9517915-62-111145616747493/AnsiballZ_file.py'
Oct 08 09:54:53 compute-0 sudo[154110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:53.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:53 compute-0 python3.9[154112]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:53 compute-0 sudo[154110]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:54 compute-0 sudo[154263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krutawaykpdcobaesgkxnpapihxqmlhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917293.9794245-62-173132551143372/AnsiballZ_file.py'
Oct 08 09:54:54 compute-0 sudo[154263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:54 compute-0 python3.9[154265]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:54 compute-0 sudo[154263]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:54 compute-0 sudo[154415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yduavypnmcfzwlufbrxablmniybzyjrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917294.6405957-62-48627522764303/AnsiballZ_file.py'
Oct 08 09:54:54 compute-0 sudo[154415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:55 compute-0 python3.9[154417]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:54:55 compute-0 sudo[154415]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:55 compute-0 ceph-mon[73572]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:54:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:55.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:55 compute-0 sudo[154568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsivbdjpmaxbxpiicwamkbiwwbrspedp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917295.303442-62-232101327946839/AnsiballZ_file.py'
Oct 08 09:54:55 compute-0 sudo[154568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:55.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:55] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct 08 09:54:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:55] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct 08 09:54:55 compute-0 python3.9[154570]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:55 compute-0 sudo[154568]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:54:56 compute-0 sudo[154721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvtnodbpotumoslpvbdvlbljcazxciwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917295.9359393-62-178147874082259/AnsiballZ_file.py'
Oct 08 09:54:56 compute-0 sudo[154721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:56 compute-0 python3.9[154723]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:54:56 compute-0 sudo[154721]: pam_unix(sudo:session): session closed for user root
Oct 08 09:54:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:56.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:54:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:56.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:54:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:56.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:57 compute-0 python3.9[154874]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:54:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy ignored for local
Oct 08 09:54:57 compute-0 kernel: ganesha.nfsd[153795]: segfault at 50 ip 00007f9fb1e1132e sp 00007f9f6f7fd210 error 4 in libntirpc.so.5.8[7f9fb1df6000+2c000] likely on CPU 4 (core 0, socket 4)
Oct 08 09:54:57 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 09:54:57 compute-0 systemd[1]: Started Process Core Dump (PID 154875/UID 0).
Oct 08 09:54:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:57.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:54:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:57.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:54:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:54:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:58.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:54:59 compute-0 sudo[155028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brofkwhdvcplrocokailouaeeinfolrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917298.6365354-194-128817581498426/AnsiballZ_seboolean.py'
Oct 08 09:54:59 compute-0 sudo[155028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:54:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:54:59 compute-0 python3.9[155030]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 08 09:54:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:54:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:59.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:54:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:54:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:54:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:59.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:54:59 compute-0 ceph-mon[73572]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:54:59 compute-0 systemd-coredump[154876]: Process 119750 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 72:
                                                    #0  0x00007f9fb1e1132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 09:54:59 compute-0 systemd[1]: systemd-coredump@2-154875-0.service: Deactivated successfully.
Oct 08 09:54:59 compute-0 systemd[1]: systemd-coredump@2-154875-0.service: Consumed 1.116s CPU time.
Oct 08 09:54:59 compute-0 podman[155036]: 2025-10-08 09:54:59.913091322 +0000 UTC m=+0.023738184 container died 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 09:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24-merged.mount: Deactivated successfully.
Oct 08 09:54:59 compute-0 podman[155036]: 2025-10-08 09:54:59.963142828 +0000 UTC m=+0.073789670 container remove 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:54:59 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 09:55:00 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 09:55:00 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.737s CPU time.
Oct 08 09:55:00 compute-0 sudo[155028]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:01 compute-0 ceph-mon[73572]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:55:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:01.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:01.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:02 compute-0 sudo[155106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:55:02 compute-0 sudo[155106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:02 compute-0 sudo[155106]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:02 compute-0 ceph-mon[73572]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:55:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:55:03 compute-0 python3.9[155256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:55:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:03.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:55:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:03.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:55:03 compute-0 python3.9[155378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917302.4005055-218-163373868727011/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:04 compute-0 sudo[155530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:55:04 compute-0 sudo[155530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:04 compute-0 sudo[155530]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:04 compute-0 sudo[155555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:55:04 compute-0 sudo[155555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:04 compute-0 python3.9[155529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:04 compute-0 ceph-mon[73572]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:55:04 compute-0 sudo[155555]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:04 compute-0 python3.9[155716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917303.8786292-263-113786529512115/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:04 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:55:05 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:55:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:05.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095505 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:55:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:05.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:05] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct 08 09:55:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:05] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct 08 09:55:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:55:05 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:55:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:55:05 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:55:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 192 B/s rd, 0 op/s
Oct 08 09:55:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:55:06 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:55:06 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:06 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:06 compute-0 ceph-mon[73572]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:55:06 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:55:06 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:55:06 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:55:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:55:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:55:06 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:55:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:55:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:55:06 compute-0 sudo[155759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:55:06 compute-0 sudo[155759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:06 compute-0 sudo[155759]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:06 compute-0 sudo[155784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:55:06 compute-0 sudo[155784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:06 compute-0 sudo[155989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sshlaecfotwnanladbrefsexomxfsjxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917306.4842424-314-31814797480724/AnsiballZ_setup.py'
Oct 08 09:55:06 compute-0 sudo[155989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:06 compute-0 podman[155926]: 2025-10-08 09:55:06.73686239 +0000 UTC m=+0.022580796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:55:06 compute-0 podman[155926]: 2025-10-08 09:55:06.977695623 +0000 UTC m=+0.263414039 container create 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 09:55:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:06.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:07 compute-0 systemd[1]: Started libpod-conmon-772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23.scope.
Oct 08 09:55:07 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:55:07 compute-0 podman[155926]: 2025-10-08 09:55:07.086252283 +0000 UTC m=+0.371970709 container init 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 09:55:07 compute-0 podman[155926]: 2025-10-08 09:55:07.097783678 +0000 UTC m=+0.383502064 container start 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 09:55:07 compute-0 podman[155926]: 2025-10-08 09:55:07.101894339 +0000 UTC m=+0.387612805 container attach 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 09:55:07 compute-0 crazy_moore[155995]: 167 167
Oct 08 09:55:07 compute-0 systemd[1]: libpod-772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23.scope: Deactivated successfully.
Oct 08 09:55:07 compute-0 podman[155926]: 2025-10-08 09:55:07.106405562 +0000 UTC m=+0.392123978 container died 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 09:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5275d6fc693e3a690756e628229236925fe6c42fc89616b9ee0c943c03f096d-merged.mount: Deactivated successfully.
Oct 08 09:55:07 compute-0 podman[155926]: 2025-10-08 09:55:07.150807069 +0000 UTC m=+0.436525445 container remove 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 09:55:07 compute-0 systemd[1]: libpod-conmon-772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23.scope: Deactivated successfully.
Oct 08 09:55:07 compute-0 python3.9[155991]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:55:07 compute-0 ceph-mon[73572]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 192 B/s rd, 0 op/s
Oct 08 09:55:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:55:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:55:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:55:07 compute-0 podman[156029]: 2025-10-08 09:55:07.312256456 +0000 UTC m=+0.047639821 container create 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 08 09:55:07 compute-0 systemd[1]: Started libpod-conmon-936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32.scope.
Oct 08 09:55:07 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:07 compute-0 podman[156029]: 2025-10-08 09:55:07.292375466 +0000 UTC m=+0.027758841 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:07 compute-0 podman[156029]: 2025-10-08 09:55:07.400500332 +0000 UTC m=+0.135883687 container init 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:55:07 compute-0 sudo[155989]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:07 compute-0 podman[156029]: 2025-10-08 09:55:07.408348531 +0000 UTC m=+0.143731886 container start 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 09:55:07 compute-0 podman[156029]: 2025-10-08 09:55:07.412190963 +0000 UTC m=+0.147574308 container attach 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 08 09:55:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:07.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:07.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:07 compute-0 sudo[156131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uelhhdrbkrljqiofqdkkachxeakqorkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917306.4842424-314-31814797480724/AnsiballZ_dnf.py'
Oct 08 09:55:07 compute-0 sudo[156131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:07 compute-0 brave_gould[156045]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:55:07 compute-0 brave_gould[156045]: --> All data devices are unavailable
Oct 08 09:55:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 192 B/s rd, 0 op/s
Oct 08 09:55:07 compute-0 systemd[1]: libpod-936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32.scope: Deactivated successfully.
Oct 08 09:55:07 compute-0 podman[156029]: 2025-10-08 09:55:07.792219687 +0000 UTC m=+0.527603032 container died 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 08 09:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6-merged.mount: Deactivated successfully.
Oct 08 09:55:07 compute-0 podman[156029]: 2025-10-08 09:55:07.837884724 +0000 UTC m=+0.573268059 container remove 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 08 09:55:07 compute-0 systemd[1]: libpod-conmon-936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32.scope: Deactivated successfully.
Oct 08 09:55:07 compute-0 sudo[155784]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:07 compute-0 sudo[156148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:55:07 compute-0 sudo[156148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:07 compute-0 sudo[156148]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:07 compute-0 sudo[156173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:55:07 compute-0 python3.9[156134]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:55:07 compute-0 sudo[156173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:08 compute-0 ceph-mon[73572]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 192 B/s rd, 0 op/s
Oct 08 09:55:08 compute-0 podman[156238]: 2025-10-08 09:55:08.312140755 +0000 UTC m=+0.035608530 container create ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:55:08 compute-0 systemd[1]: Started libpod-conmon-ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05.scope.
Oct 08 09:55:08 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:55:08 compute-0 podman[156238]: 2025-10-08 09:55:08.38110557 +0000 UTC m=+0.104573365 container init ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:55:08 compute-0 podman[156238]: 2025-10-08 09:55:08.387200813 +0000 UTC m=+0.110668588 container start ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 08 09:55:08 compute-0 musing_sammet[156254]: 167 167
Oct 08 09:55:08 compute-0 systemd[1]: libpod-ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05.scope: Deactivated successfully.
Oct 08 09:55:08 compute-0 podman[156238]: 2025-10-08 09:55:08.297601344 +0000 UTC m=+0.021069139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:55:08 compute-0 podman[156238]: 2025-10-08 09:55:08.443002422 +0000 UTC m=+0.166470197 container attach ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 09:55:08 compute-0 podman[156238]: 2025-10-08 09:55:08.443639382 +0000 UTC m=+0.167107167 container died ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 09:55:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc6939af4df8ad3e2b0428a6a0984091b7111e45887a22ec4ae8472293ad9332-merged.mount: Deactivated successfully.
Oct 08 09:55:08 compute-0 podman[156238]: 2025-10-08 09:55:08.556461557 +0000 UTC m=+0.279929332 container remove ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:55:08 compute-0 systemd[1]: libpod-conmon-ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05.scope: Deactivated successfully.
Oct 08 09:55:08 compute-0 podman[156278]: 2025-10-08 09:55:08.723696908 +0000 UTC m=+0.039289156 container create 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 08 09:55:08 compute-0 systemd[1]: Started libpod-conmon-717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386.scope.
Oct 08 09:55:08 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:08 compute-0 podman[156278]: 2025-10-08 09:55:08.804372514 +0000 UTC m=+0.119964792 container init 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:55:08 compute-0 podman[156278]: 2025-10-08 09:55:08.707928657 +0000 UTC m=+0.023520925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:55:08 compute-0 podman[156278]: 2025-10-08 09:55:08.814106692 +0000 UTC m=+0.129698940 container start 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:55:08 compute-0 podman[156278]: 2025-10-08 09:55:08.817463399 +0000 UTC m=+0.133055647 container attach 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 09:55:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:08.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:55:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:08.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:55:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:08.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:55:09 compute-0 jolly_lewin[156294]: {
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:     "1": [
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:         {
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "devices": [
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "/dev/loop3"
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             ],
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "lv_name": "ceph_lv0",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "lv_size": "21470642176",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "name": "ceph_lv0",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "tags": {
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.cluster_name": "ceph",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.crush_device_class": "",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.encrypted": "0",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.osd_id": "1",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.type": "block",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.vdo": "0",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:                 "ceph.with_tpm": "0"
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             },
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "type": "block",
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:             "vg_name": "ceph_vg0"
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:         }
Oct 08 09:55:09 compute-0 jolly_lewin[156294]:     ]
Oct 08 09:55:09 compute-0 jolly_lewin[156294]: }
Oct 08 09:55:09 compute-0 systemd[1]: libpod-717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386.scope: Deactivated successfully.
Oct 08 09:55:09 compute-0 podman[156278]: 2025-10-08 09:55:09.080164104 +0000 UTC m=+0.395756352 container died 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 09:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf-merged.mount: Deactivated successfully.
Oct 08 09:55:09 compute-0 podman[156278]: 2025-10-08 09:55:09.126263256 +0000 UTC m=+0.441855504 container remove 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 09:55:09 compute-0 systemd[1]: libpod-conmon-717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386.scope: Deactivated successfully.
Oct 08 09:55:09 compute-0 sudo[156173]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:09 compute-0 sudo[156131]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:09 compute-0 sudo[156316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:55:09 compute-0 sudo[156316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:09 compute-0 sudo[156316]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:09 compute-0 sudo[156365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:55:09 compute-0 sudo[156365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:09.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:55:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:09.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:55:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 0 op/s
Oct 08 09:55:09 compute-0 podman[156483]: 2025-10-08 09:55:09.774113937 +0000 UTC m=+0.043719956 container create dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:55:09 compute-0 systemd[1]: Started libpod-conmon-dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b.scope.
Oct 08 09:55:09 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:55:09 compute-0 podman[156483]: 2025-10-08 09:55:09.757717798 +0000 UTC m=+0.027323837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:55:09 compute-0 podman[156483]: 2025-10-08 09:55:09.901715592 +0000 UTC m=+0.171321631 container init dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 09:55:09 compute-0 podman[156483]: 2025-10-08 09:55:09.910604343 +0000 UTC m=+0.180210362 container start dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 09:55:09 compute-0 wizardly_meninsky[156500]: 167 167
Oct 08 09:55:09 compute-0 systemd[1]: libpod-dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b.scope: Deactivated successfully.
Oct 08 09:55:09 compute-0 podman[156483]: 2025-10-08 09:55:09.929563534 +0000 UTC m=+0.199169573 container attach dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 09:55:09 compute-0 podman[156483]: 2025-10-08 09:55:09.929985098 +0000 UTC m=+0.199591127 container died dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:55:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b253cdaeff5c92458f6635e451cea3b8d934f9e6f76d36e177933cb5b29e1d3-merged.mount: Deactivated successfully.
Oct 08 09:55:10 compute-0 podman[156483]: 2025-10-08 09:55:10.051697154 +0000 UTC m=+0.321303163 container remove dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Oct 08 09:55:10 compute-0 sudo[156590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hetzueuvizwerazjwtpeaqocvhndrcti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917309.4210093-350-185334136867649/AnsiballZ_systemd.py'
Oct 08 09:55:10 compute-0 sudo[156590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:10 compute-0 systemd[1]: libpod-conmon-dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b.scope: Deactivated successfully.
Oct 08 09:55:10 compute-0 podman[156600]: 2025-10-08 09:55:10.233965281 +0000 UTC m=+0.041699873 container create f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:55:10 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 3.
Oct 08 09:55:10 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:55:10 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.737s CPU time.
Oct 08 09:55:10 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:55:10 compute-0 systemd[1]: Started libpod-conmon-f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8.scope.
Oct 08 09:55:10 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:10 compute-0 podman[156600]: 2025-10-08 09:55:10.294547241 +0000 UTC m=+0.102281883 container init f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:55:10 compute-0 podman[156600]: 2025-10-08 09:55:10.305780807 +0000 UTC m=+0.113515399 container start f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:55:10 compute-0 podman[156600]: 2025-10-08 09:55:10.215244848 +0000 UTC m=+0.022979470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:55:10 compute-0 podman[156600]: 2025-10-08 09:55:10.309076471 +0000 UTC m=+0.116811113 container attach f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:55:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095510 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:55:10 compute-0 python3.9[156592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 09:55:10 compute-0 podman[156664]: 2025-10-08 09:55:10.465668544 +0000 UTC m=+0.040326618 container create c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 09:55:10 compute-0 sudo[156590]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:10 compute-0 podman[156664]: 2025-10-08 09:55:10.539650179 +0000 UTC m=+0.114308303 container init c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:55:10 compute-0 podman[156664]: 2025-10-08 09:55:10.447694145 +0000 UTC m=+0.022352249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:55:10 compute-0 podman[156664]: 2025-10-08 09:55:10.547696404 +0000 UTC m=+0.122354498 container start c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:55:10 compute-0 bash[156664]: c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e
Oct 08 09:55:10 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:55:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 09:55:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 09:55:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 09:55:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 09:55:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 09:55:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 09:55:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 09:55:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:55:10 compute-0 ceph-mon[73572]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 0 op/s
Oct 08 09:55:10 compute-0 lvm[156942]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:55:10 compute-0 lvm[156942]: VG ceph_vg0 finished
Oct 08 09:55:11 compute-0 frosty_franklin[156617]: {}
Oct 08 09:55:11 compute-0 systemd[1]: libpod-f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8.scope: Deactivated successfully.
Oct 08 09:55:11 compute-0 systemd[1]: libpod-f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8.scope: Consumed 1.095s CPU time.
Oct 08 09:55:11 compute-0 podman[156600]: 2025-10-08 09:55:11.04389285 +0000 UTC m=+0.851627452 container died f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 08 09:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc-merged.mount: Deactivated successfully.
Oct 08 09:55:11 compute-0 podman[156600]: 2025-10-08 09:55:11.118537245 +0000 UTC m=+0.926271867 container remove f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 09:55:11 compute-0 systemd[1]: libpod-conmon-f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8.scope: Deactivated successfully.
Oct 08 09:55:11 compute-0 python3.9[156940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:11 compute-0 sudo[156365]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:55:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:11 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:55:11 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:11 compute-0 sudo[156959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:55:11 compute-0 sudo[156959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:11 compute-0 sudo[156959]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:11.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:11.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:11 compute-0 python3.9[157104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917310.6818042-374-120256607137196/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 0 op/s
Oct 08 09:55:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:55:12 compute-0 ovn_controller[153187]: 2025-10-08T09:55:12Z|00025|memory|INFO|16512 kB peak resident set size after 29.9 seconds
Oct 08 09:55:12 compute-0 ovn_controller[153187]: 2025-10-08T09:55:12Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct 08 09:55:12 compute-0 podman[157229]: 2025-10-08 09:55:12.274427277 +0000 UTC m=+0.128708280 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 08 09:55:12 compute-0 python3.9[157268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:12 compute-0 python3.9[157402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917311.8980722-374-133798103445745/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:13 compute-0 ceph-mon[73572]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 0 op/s
Oct 08 09:55:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:13.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:55:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:13.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:55:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 771 B/s rd, 289 B/s wr, 1 op/s
Oct 08 09:55:14 compute-0 ceph-mon[73572]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 771 B/s rd, 289 B/s wr, 1 op/s
Oct 08 09:55:14 compute-0 python3.9[157554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:14 compute-0 python3.9[157675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917313.8536303-506-125205628279069/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:15 compute-0 python3.9[157826]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:15.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:55:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:15.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:55:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:15] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct 08 09:55:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:15] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct 08 09:55:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 675 B/s rd, 289 B/s wr, 1 op/s
Oct 08 09:55:16 compute-0 python3.9[157948]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917315.063814-506-251721765152198/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:55:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:55:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 09:55:16 compute-0 python3.9[158098]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:55:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:55:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:55:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:55:16 compute-0 ceph-mon[73572]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 675 B/s rd, 289 B/s wr, 1 op/s
Oct 08 09:55:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:16.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:17 compute-0 sudo[158251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epyzievvsawuabaavnmcfqfmxkshwnfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917317.0546334-620-253966551640409/AnsiballZ_file.py'
Oct 08 09:55:17 compute-0 sudo[158251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:17 compute-0 python3.9[158253]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:55:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:55:17 compute-0 sudo[158251]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:55:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:17.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:55:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Oct 08 09:55:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:55:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:55:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:55:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:55:17 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:55:17 compute-0 sudo[158404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woympwwhrawrljvczxypkwdbbxbhylds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917317.7212157-644-244322434063720/AnsiballZ_stat.py'
Oct 08 09:55:17 compute-0 sudo[158404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:55:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:55:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:55:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:55:18 compute-0 python3.9[158406]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:18 compute-0 sudo[158404]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:18 compute-0 sudo[158482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saevqtblpswtrcbrkqlqtztpwjgowxzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917317.7212157-644-244322434063720/AnsiballZ_file.py'
Oct 08 09:55:18 compute-0 sudo[158482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:18 compute-0 python3.9[158484]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:18 compute-0 sudo[158482]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:18.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:18 compute-0 ceph-mon[73572]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Oct 08 09:55:19 compute-0 sudo[158635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmoolllrlfrvqdqeppzilnllfdslmjqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917318.794359-644-258062410472285/AnsiballZ_stat.py'
Oct 08 09:55:19 compute-0 sudo[158635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:19 compute-0 python3.9[158637]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:19 compute-0 sudo[158635]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:19 compute-0 sudo[158713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goygiiyppfjxhdrsjyrwetjftndubtzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917318.794359-644-258062410472285/AnsiballZ_file.py'
Oct 08 09:55:19 compute-0 sudo[158713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:19.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:19.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:19 compute-0 python3.9[158715]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:19 compute-0 sudo[158713]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:55:20 compute-0 sudo[158866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-retzsnvzvwpaottezivyngdhgxvbyqcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917319.9728136-713-224506577790924/AnsiballZ_file.py'
Oct 08 09:55:20 compute-0 sudo[158866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:20 compute-0 python3.9[158868]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:55:20 compute-0 sudo[158866]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:20 compute-0 sudo[159018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcyztyhdlkbevnfwxxvqksgwkxzlixgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917320.6467931-737-74962014428894/AnsiballZ_stat.py'
Oct 08 09:55:20 compute-0 sudo[159018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:20 compute-0 ceph-mon[73572]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:55:21 compute-0 python3.9[159020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:21 compute-0 sudo[159018]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:21 compute-0 sudo[159097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbxkkulglhlvcsxqhhvlewqyhwibzdjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917320.6467931-737-74962014428894/AnsiballZ_file.py'
Oct 08 09:55:21 compute-0 sudo[159097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:21.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:21 compute-0 python3.9[159099]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:55:21 compute-0 sudo[159097]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:55:22 compute-0 sudo[159234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:55:22 compute-0 sudo[159234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:22 compute-0 sudo[159271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dygzdrdfelrkqhbakmjggkpemshrofbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917321.8232937-773-93310883427189/AnsiballZ_stat.py'
Oct 08 09:55:22 compute-0 sudo[159271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:22 compute-0 sudo[159234]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:22 compute-0 python3.9[159277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:22 compute-0 sudo[159271]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:22 compute-0 sudo[159353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqarzkhgotmdsbivvpojembdlweydyoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917321.8232937-773-93310883427189/AnsiballZ_file.py'
Oct 08 09:55:22 compute-0 sudo[159353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:22 compute-0 python3.9[159355]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:55:22 compute-0 sudo[159353]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:55:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:55:22 compute-0 ceph-mon[73572]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 09:55:23 compute-0 sudo[159518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oktxgudbkiqrscycjzqhohbvwzqaramw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917322.9880629-809-115675720021578/AnsiballZ_systemd.py'
Oct 08 09:55:23 compute-0 sudo[159518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:55:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:23.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:55:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:23 compute-0 python3.9[159520]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:55:23 compute-0 systemd[1]: Reloading.
Oct 08 09:55:23 compute-0 systemd-rc-local-generator[159552]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:55:23 compute-0 systemd-sysv-generator[159555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:55:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:23.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 08 09:55:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:23 compute-0 sudo[159518]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:24 compute-0 sudo[159712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvpaatyausdnoxupduamuxgbvzgrzqle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917324.1634088-833-138942416110739/AnsiballZ_stat.py'
Oct 08 09:55:24 compute-0 sudo[159712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:24 compute-0 python3.9[159714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:24 compute-0 sudo[159712]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:24 compute-0 sudo[159790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soyzwklnsilkmqveqxqndbxarbpfsbhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917324.1634088-833-138942416110739/AnsiballZ_file.py'
Oct 08 09:55:24 compute-0 sudo[159790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:24 compute-0 ceph-mon[73572]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 08 09:55:25 compute-0 python3.9[159792]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:55:25 compute-0 sudo[159790]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095525 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:55:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:25.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:25.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:25 compute-0 sudo[159943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpvxnvrlthzobijptkadwpivnnqvaqfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917325.440775-869-263579329102501/AnsiballZ_stat.py'
Oct 08 09:55:25 compute-0 sudo[159943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:25] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct 08 09:55:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:25] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct 08 09:55:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 09:55:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:55:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:55:25 compute-0 python3.9[159945]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:25 compute-0 sudo[159943]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:26 compute-0 sudo[160022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvtcofhztkhorhopptbxfrxccrytotbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917325.440775-869-263579329102501/AnsiballZ_file.py'
Oct 08 09:55:26 compute-0 sudo[160022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:26 compute-0 python3.9[160024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:55:26 compute-0 sudo[160022]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:26 compute-0 sudo[160174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zorltjsbjxriojhdeccuorcdehjwqxzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917326.699564-905-81644726093214/AnsiballZ_systemd.py'
Oct 08 09:55:26 compute-0 sudo[160174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:26.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:27 compute-0 ceph-mon[73572]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 09:55:27 compute-0 python3.9[160176]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:55:27 compute-0 systemd[1]: Reloading.
Oct 08 09:55:27 compute-0 systemd-rc-local-generator[160206]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:55:27 compute-0 systemd-sysv-generator[160212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:55:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a400016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 09:55:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:27.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 09:55:27 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 09:55:27 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 09:55:27 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 09:55:27 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 09:55:27 compute-0 sudo[160174]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 09:55:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a640025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:28 compute-0 sudo[160370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqdhjjvhqnjoqnrzsarhyjpcnxxayqbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917328.1207635-935-264019084529034/AnsiballZ_file.py'
Oct 08 09:55:28 compute-0 sudo[160370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:28 compute-0 python3.9[160372]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:28 compute-0 sudo[160370]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:28.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:28 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:55:29 compute-0 sudo[160523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiivvqodmdigbllvdkeftnznakqdjzyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917328.8205082-959-109578901911704/AnsiballZ_stat.py'
Oct 08 09:55:29 compute-0 sudo[160523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:29 compute-0 ceph-mon[73572]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 09:55:29 compute-0 python3.9[160525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:29 compute-0 sudo[160523]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a400016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:29.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:29 compute-0 sudo[160646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewudvcgjawxmzsxzaezkjwaompqypcph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917328.8205082-959-109578901911704/AnsiballZ_copy.py'
Oct 08 09:55:29 compute-0 sudo[160646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:29.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 08 09:55:29 compute-0 python3.9[160648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917328.8205082-959-109578901911704/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:29 compute-0 sudo[160646]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:30 compute-0 sudo[160799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxvjrufcjapfjlytpfnsvmnbnxkgllix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917330.3001046-1010-252597104384032/AnsiballZ_file.py'
Oct 08 09:55:30 compute-0 sudo[160799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:30 compute-0 python3.9[160801]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:55:30 compute-0 sudo[160799]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:31 compute-0 ceph-mon[73572]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 08 09:55:31 compute-0 sudo[160952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izlptgllopweabdxgxdfgnodcckcucev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917331.115463-1034-239505006791185/AnsiballZ_stat.py'
Oct 08 09:55:31 compute-0 sudo[160952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a640025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:31.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:31 compute-0 python3.9[160954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:55:31 compute-0 sudo[160952]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:55:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a400016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:31 compute-0 sudo[161076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdbzviypffwqoiuztxohnxykxaampau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917331.115463-1034-239505006791185/AnsiballZ_copy.py'
Oct 08 09:55:31 compute-0 sudo[161076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:32 compute-0 python3.9[161078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917331.115463-1034-239505006791185/.source.json _original_basename=.n1kz9a98 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:55:32 compute-0 sudo[161076]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095532 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:55:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:55:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:55:32 compute-0 sudo[161228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwaxjrrcxopavigeuxezkzdnegdsgcot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917332.5583055-1079-146600467805475/AnsiballZ_file.py'
Oct 08 09:55:32 compute-0 sudo[161228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:33 compute-0 python3.9[161230]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:55:33 compute-0 sudo[161228]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:33 compute-0 ceph-mon[73572]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:55:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:55:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a640032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:33.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:33 compute-0 sudo[161381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edqvvjlktmeravsbbpivowdzwurpzpri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917333.3548858-1103-246317296104905/AnsiballZ_stat.py'
Oct 08 09:55:33 compute-0 sudo[161381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:33.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:55:33 compute-0 sudo[161381]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:34 compute-0 sudo[161505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akynxyufmuhzzgwyobghlgfaynczjhrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917333.3548858-1103-246317296104905/AnsiballZ_copy.py'
Oct 08 09:55:34 compute-0 sudo[161505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:34 compute-0 ceph-mon[73572]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:55:34 compute-0 sudo[161505]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:35 compute-0 sudo[161658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjrigeryqyuncnidqeiicxtrojosxric ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917334.9138846-1154-239914375594029/AnsiballZ_container_config_data.py'
Oct 08 09:55:35 compute-0 sudo[161658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:35 compute-0 python3.9[161660]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 08 09:55:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:35.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:35 compute-0 sudo[161658]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:35.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:35] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct 08 09:55:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:35] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct 08 09:55:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:55:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a640032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:36 compute-0 sudo[161811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wolxlgzzamqhvhpexushokoiqjsnyodl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917335.7982953-1181-211787660682065/AnsiballZ_container_config_hash.py'
Oct 08 09:55:36 compute-0 sudo[161811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:36 compute-0 python3.9[161813]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 09:55:36 compute-0 sudo[161811]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:36 compute-0 ceph-mon[73572]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:55:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:36.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:55:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:36.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:55:37 compute-0 sudo[161964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omuvjesrxnubrtfgepbjsrbecjtvpxbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917336.8088164-1208-158128704064799/AnsiballZ_podman_container_info.py'
Oct 08 09:55:37 compute-0 sudo[161964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:37 compute-0 python3.9[161966]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 08 09:55:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:37.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:37.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:37 compute-0 sudo[161964]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:55:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:38 compute-0 ceph-mon[73572]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:55:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:38.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:39 compute-0 sudo[162144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nueofonxbnayrlbmmojfczbwfnzmopfj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917338.5803032-1247-229437287902531/AnsiballZ_edpm_container_manage.py'
Oct 08 09:55:39 compute-0 sudo[162144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:39 compute-0 python3[162146]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 09:55:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:39.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:39.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:55:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:40 compute-0 ceph-mon[73572]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 08 09:55:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:41.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:41.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:55:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:42 compute-0 sudo[162213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:55:42 compute-0 sudo[162213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:55:42 compute-0 sudo[162213]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:43.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:43.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:55:43 compute-0 ceph-mon[73572]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:55:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:45 compute-0 ceph-mon[73572]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:55:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:45.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:45.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:45] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct 08 09:55:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:45] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct 08 09:55:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:46 compute-0 podman[162248]: 2025-10-08 09:55:46.380903152 +0000 UTC m=+3.531715483 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct 08 09:55:46 compute-0 ceph-mon[73572]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:55:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:46.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:47 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:47 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:47.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:55:47
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.control', '.nfs', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'backups']
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:55:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:47.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:47 compute-0 podman[162160]: 2025-10-08 09:55:47.769741133 +0000 UTC m=+8.220098988 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:55:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:55:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:55:47 compute-0 podman[162347]: 2025-10-08 09:55:47.917295587 +0000 UTC m=+0.048422563 container create 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 08 09:55:47 compute-0 podman[162347]: 2025-10-08 09:55:47.891071063 +0000 UTC m=+0.022198069 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 08 09:55:47 compute-0 python3[162146]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:55:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:55:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:55:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:47 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:48 compute-0 sudo[162144]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:55:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:55:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:55:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:55:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:55:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:55:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:55:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:55:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:55:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:48.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095549 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:55:49 compute-0 ceph-mon[73572]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:49.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:49.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:49 compute-0 sudo[162533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzzcrsxgalvhvfzoeqkrlaselputsnvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917349.542875-1271-31663016538486/AnsiballZ_stat.py'
Oct 08 09:55:49 compute-0 sudo[162533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:50 compute-0 python3.9[162535]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:55:50 compute-0 sudo[162533]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:50 compute-0 ceph-mon[73572]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:50 compute-0 sudo[162688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cibetnefwxwetrsbzvrultkxbfnqsave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917350.4139178-1298-57845010323065/AnsiballZ_file.py'
Oct 08 09:55:50 compute-0 sudo[162688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:50 compute-0 python3.9[162690]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:55:50 compute-0 sudo[162688]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:51 compute-0 sudo[162765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itltwdzywbawxmvkjdjwrbapmezrjicf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917350.4139178-1298-57845010323065/AnsiballZ_stat.py'
Oct 08 09:55:51 compute-0 sudo[162765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:51 compute-0 python3.9[162767]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 09:55:51 compute-0 sudo[162765]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:51.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:51.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:51 compute-0 sudo[162916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpbqfolksjihlntioadolxgrxnnwsgiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917351.382558-1298-18874755154254/AnsiballZ_copy.py'
Oct 08 09:55:51 compute-0 sudo[162916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:52 compute-0 python3.9[162918]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917351.382558-1298-18874755154254/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:55:52 compute-0 sudo[162916]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:52 compute-0 sudo[162993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qscsxdhvvuxwpgfpzrhdfchkpsscgyxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917351.382558-1298-18874755154254/AnsiballZ_systemd.py'
Oct 08 09:55:52 compute-0 sudo[162993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:52 compute-0 python3.9[162995]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 09:55:52 compute-0 systemd[1]: Reloading.
Oct 08 09:55:52 compute-0 systemd-rc-local-generator[163021]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:55:52 compute-0 systemd-sysv-generator[163025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:55:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 08 09:55:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 08 09:55:52 compute-0 ceph-mon[73572]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:55:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 08 09:55:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 08 09:55:52 compute-0 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 08 09:55:52 compute-0 sudo[162993]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:53 compute-0 sudo[163104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-madawgmuqlghkhiaiobtkaawxztrbhra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917351.382558-1298-18874755154254/AnsiballZ_systemd.py'
Oct 08 09:55:53 compute-0 sudo[163104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:55:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:53.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:53 compute-0 python3.9[163106]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:55:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:53.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 08 09:55:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:54 compute-0 systemd[1]: Reloading.
Oct 08 09:55:54 compute-0 systemd-sysv-generator[163147]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:55:54 compute-0 systemd-rc-local-generator[163142]: /etc/rc.d/rc.local is not marked executable, skipping.
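[editorial note] The three Ansible tasks above (ansible-copy of the unit file, ansible-systemd daemon_reload=True, ansible-systemd state=restarted enabled=True) amount to a standard deploy-and-restart of edpm_ovn_metadata_agent.service. A minimal stand-alone equivalent is sketched below; only the unit name, destination path, and mode come from the journal, the staging path is hypothetical, and this is not the edpm_ansible implementation.

    #!/usr/bin/env python3
    # Illustrative replay of the copy / daemon-reload / enable+restart sequence
    # recorded above; not the edpm_ansible implementation.
    import os
    import shutil
    import subprocess

    UNIT = "edpm_ovn_metadata_agent.service"                    # unit name from the journal
    SRC = "/home/zuul/staged/edpm_ovn_metadata_agent.service"   # hypothetical staging path
    DST = f"/etc/systemd/system/{UNIT}"                         # destination from the journal

    def deploy_and_restart() -> None:
        shutil.copy(SRC, DST)                                        # ansible-copy
        os.chmod(DST, 0o644)                                         # mode=0644 from the task
        subprocess.run(["systemctl", "daemon-reload"], check=True)   # "systemd[1]: Reloading."
        subprocess.run(["systemctl", "enable", UNIT], check=True)    # enabled=True
        subprocess.run(["systemctl", "restart", UNIT], check=True)   # state=restarted

    if __name__ == "__main__":
        deploy_and_restart()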
Oct 08 09:55:54 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 08 09:55:55 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1fbaaea5195f62cd87d30536b3f349b4ffb866cbbd8a6f5bbbf1986b93e338/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1fbaaea5195f62cd87d30536b3f349b4ffb866cbbd8a6f5bbbf1986b93e338/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 09:55:55 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784.
Oct 08 09:55:55 compute-0 ceph-mon[73572]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 08 09:55:55 compute-0 podman[163153]: 2025-10-08 09:55:55.114531799 +0000 UTC m=+0.161967870 container init 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: + sudo -E kolla_set_configs
Oct 08 09:55:55 compute-0 podman[163153]: 2025-10-08 09:55:55.157001241 +0000 UTC m=+0.204437332 container start 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 09:55:55 compute-0 edpm-start-podman-container[163153]: ovn_metadata_agent
Oct 08 09:55:55 compute-0 podman[163177]: 2025-10-08 09:55:55.239705058 +0000 UTC m=+0.065843710 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 09:55:55 compute-0 edpm-start-podman-container[163152]: Creating additional drop-in dependency for "ovn_metadata_agent" (96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784)
Oct 08 09:55:55 compute-0 systemd[1]: Reloading.
Oct 08 09:55:55 compute-0 systemd-rc-local-generator[163242]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Validating config file
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Copying service configuration files
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Writing out command to execute
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 08 09:55:55 compute-0 systemd-sysv-generator[163246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: ++ cat /run_command
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: + CMD=neutron-ovn-metadata-agent
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: + ARGS=
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: + sudo kolla_copy_cacerts
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: + [[ ! -n '' ]]
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: + . kolla_extend_start
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: Running command: 'neutron-ovn-metadata-agent'
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: + umask 0022
Oct 08 09:55:55 compute-0 ovn_metadata_agent[163169]: + exec neutron-ovn-metadata-agent
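[editorial note] The kolla_set_configs trace above reads /var/lib/kolla/config_files/config.json, copies the mounted /etc/neutron.conf.d files into place under the COPY_ALWAYS strategy, writes the service command out, and finally execs neutron-ovn-metadata-agent (the shell reads it back via 'cat /run_command'). A minimal sketch of that copy-then-exec pattern follows; the "command"/"config_files" key names follow the common Kolla config.json layout and are an assumption, not a dump of the real tool.

    #!/usr/bin/env python3
    # Sketch of the config-copy-then-exec pattern traced above; field names
    # follow the usual Kolla config.json layout and are assumed, not quoted.
    import json
    import os
    import shutil

    CONFIG = "/var/lib/kolla/config_files/config.json"    # path from the journal

    def set_configs() -> str:
        with open(CONFIG) as f:
            cfg = json.load(f)
        for item in cfg.get("config_files", []):           # COPY_ALWAYS: copy every file
            shutil.copy(item["source"], item["dest"])
            os.chmod(item["dest"], int(item.get("perm", "0644"), 8))
        with open("/run_command", "w") as f:               # later read by 'cat /run_command'
            f.write(cfg["command"])
        return cfg["command"]

    if __name__ == "__main__":
        argv = set_configs().split()
        os.execvp(argv[0], argv)                            # '+ exec neutron-ovn-metadata-agent'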
Oct 08 09:55:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:55 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 08 09:55:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:55:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:55.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:55:55 compute-0 sudo[163104]: pam_unix(sudo:session): session closed for user root
Oct 08 09:55:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:55] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:55:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:55] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct 08 09:55:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:55.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 08 09:55:55 compute-0 sshd-session[153802]: Connection closed by 192.168.122.30 port 52014
Oct 08 09:55:55 compute-0 sshd-session[153799]: pam_unix(sshd:session): session closed for user zuul
Oct 08 09:55:55 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct 08 09:55:55 compute-0 systemd[1]: session-53.scope: Consumed 54.302s CPU time.
Oct 08 09:55:55 compute-0 systemd-logind[798]: Session 53 logged out. Waiting for processes to exit.
Oct 08 09:55:55 compute-0 systemd-logind[798]: Removed session 53.
Oct 08 09:55:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:57.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.351 163175 INFO neutron.common.config [-] Logging enabled!
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.352 163175 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.352 163175 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
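[editorial note] The DEBUG block that follows is oslo.config's log_opt_values() writing every registered option (secret options such as transport_url and metadata_proxy_shared_secret are masked as ****) once the agent has parsed /etc/neutron/neutron.conf and the /etc/neutron.conf.d config_dir. A minimal sketch of how any oslo.config-based service produces such a dump; the single registered option is illustrative only, the real agent registers the full neutron option set.

    #!/usr/bin/env python3
    # Minimal oslo.config example that emits the same kind of option dump as
    # the DEBUG lines below; the registered option is only an illustration.
    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("agent_down_time", default=75)])

    def main() -> None:
        logging.basicConfig(level=logging.DEBUG)
        # The real agent points this at /etc/neutron/neutron.conf plus the
        # /etc/neutron.conf.d config_dir; an empty argv also works for a demo.
        CONF([], project="neutron")
        CONF.log_opt_values(LOG, logging.DEBUG)   # banner, gathered sources, one line per option

    if __name__ == "__main__":
        main()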
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.393 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.393 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.393 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.393 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.401 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.401 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.401 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.401 163175 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.402 163175 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 08 09:55:57 compute-0 ceph-mon[73572]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.416 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 26869918-b723-425c-a2e1-0d697f3d0fec (UUID: 26869918-b723-425c-a2e1-0d697f3d0fec) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.435 163175 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.436 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.436 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.436 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.438 163175 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.443 163175 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.449 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '26869918-b723-425c-a2e1-0d697f3d0fec'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], external_ids={}, name=26869918-b723-425c-a2e1-0d697f3d0fec, nb_cfg_timestamp=1759917290430, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.450 163175 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f191f102f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.451 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.451 163175 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.451 163175 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.451 163175 INFO oslo_service.service [-] Starting 1 workers
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.456 163175 DEBUG oslo_service.service [-] Started child 163284 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 08 09:55:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.459 163284 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-159970'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.459 163175 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp0cqyf9jh/privsep.sock']
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.484 163284 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.485 163284 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.485 163284 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.504 163284 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.511 163284 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 08 09:55:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.520 163284 INFO eventlet.wsgi.server [-] (163284) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 08 09:55:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:57.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:57.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 08 09:55:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:58 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.166 163175 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.166 163175 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0cqyf9jh/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.028 163290 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.035 163290 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.039 163290 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.039 163290 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163290
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.169 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[a4580327-dafc-4d09-8781-93f599a4178e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 09:55:58 compute-0 ceph-mon[73572]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 08 09:55:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.649 163290 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.649 163290 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 09:55:58 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.649 163290 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 09:55:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:58.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.169 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[cf03aed7-3e52-42ea-b3da-151763769da7]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.171 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, column=external_ids, values=({'neutron:ovn-metadata-id': '2ded52bb-1ae7-5b18-bd1d-b28ab5fb6948'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.181 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.216 163175 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.216 163175 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.221 163175 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.221 163175 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 09:55:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.221 163175 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 08 09:55:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:55:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:59.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:55:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:55:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:59.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:55:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Oct 08 09:55:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:00 compute-0 ceph-mon[73572]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Oct 08 09:56:00 compute-0 sshd-session[163297]: Accepted publickey for zuul from 192.168.122.30 port 51082 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 09:56:00 compute-0 systemd-logind[798]: New session 54 of user zuul.
Oct 08 09:56:00 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 08 09:56:00 compute-0 sshd-session[163297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 09:56:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:01.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:01.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Oct 08 09:56:01 compute-0 python3.9[163451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 09:56:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:02 compute-0 sudo[163481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:56:02 compute-0 sudo[163481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:02 compute-0 sudo[163481]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:02 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:56:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:56:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:56:02 compute-0 ceph-mon[73572]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Oct 08 09:56:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:56:02 compute-0 sudo[163631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xflewrsakhlzpfsveqplzgqtokdcttqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917362.5054848-62-189320059124534/AnsiballZ_command.py'
Oct 08 09:56:02 compute-0 sudo[163631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:03 compute-0 python3.9[163633]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:56:03 compute-0 sudo[163631]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:03.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:03.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 597 B/s wr, 164 op/s
Oct 08 09:56:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:04 compute-0 sudo[163798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etjhprbkwstsrrlaumivpiwhzdnteify ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917363.6398408-95-192275634868224/AnsiballZ_systemd_service.py'
Oct 08 09:56:04 compute-0 sudo[163798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:04 compute-0 python3.9[163800]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 09:56:04 compute-0 systemd[1]: Reloading.
Oct 08 09:56:04 compute-0 systemd-rc-local-generator[163824]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:56:04 compute-0 systemd-sysv-generator[163832]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:56:04 compute-0 sudo[163798]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:04 compute-0 ceph-mon[73572]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 597 B/s wr, 164 op/s
Oct 08 09:56:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:05.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:56:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:56:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:56:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:05] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Oct 08 09:56:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:05] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Oct 08 09:56:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:05.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 597 B/s wr, 135 op/s
Oct 08 09:56:05 compute-0 python3.9[163986]: ansible-ansible.builtin.service_facts Invoked
Oct 08 09:56:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:05 compute-0 network[164004]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 09:56:06 compute-0 network[164005]: 'network-scripts' will be removed from distribution in near future.
Oct 08 09:56:06 compute-0 network[164006]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 09:56:06 compute-0 ceph-mon[73572]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 597 B/s wr, 135 op/s
Oct 08 09:56:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:07.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:56:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:07.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:07.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:07.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 597 B/s wr, 135 op/s
Oct 08 09:56:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:08 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:56:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:08.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:09 compute-0 ceph-mon[73572]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 597 B/s wr, 135 op/s
Oct 08 09:56:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c0095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:09.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:09.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 1023 B/s wr, 137 op/s
Oct 08 09:56:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:10 compute-0 sudo[164273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdyesgsaclkrgwpiyrlqcfamtxsmuryi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917369.5052545-152-186537849569143/AnsiballZ_systemd_service.py'
Oct 08 09:56:10 compute-0 sudo[164273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:10 compute-0 python3.9[164275]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:56:10 compute-0 sudo[164273]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:10 compute-0 sudo[164426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cldvizmztqlwcjgetczmnczoxjwpjwsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917370.6248913-152-279062872297736/AnsiballZ_systemd_service.py'
Oct 08 09:56:10 compute-0 sudo[164426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:11 compute-0 ceph-mon[73572]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 1023 B/s wr, 137 op/s
Oct 08 09:56:11 compute-0 python3.9[164428]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:56:11 compute-0 sudo[164426]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:11 compute-0 sudo[164507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:56:11 compute-0 sudo[164507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:11 compute-0 sudo[164507]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:11 compute-0 sudo[164555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:56:11 compute-0 sudo[164555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:11 compute-0 sudo[164630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odsualcogtphnsznxxydzxjrgmxwmymr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917371.4197814-152-152249508392635/AnsiballZ_systemd_service.py'
Oct 08 09:56:11 compute-0 sudo[164630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:11.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:56:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c0095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:12 compute-0 python3.9[164632]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:56:12 compute-0 sudo[164630]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:12 compute-0 sudo[164555]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:56:12 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:56:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:56:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:56:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 08 09:56:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:56:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:56:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:56:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:56:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:56:12 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:56:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:56:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:56:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:56:12 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:56:12 compute-0 sudo[164742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:56:12 compute-0 sudo[164742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:12 compute-0 sudo[164742]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:12 compute-0 sudo[164790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:56:12 compute-0 sudo[164790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:12 compute-0 sudo[164865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxnckbyiyjsgyqlennlkbozcqvhlxhkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917372.2467175-152-280634575861501/AnsiballZ_systemd_service.py'
Oct 08 09:56:12 compute-0 sudo[164865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:12 compute-0 podman[164911]: 2025-10-08 09:56:12.905093143 +0000 UTC m=+0.087824091 container create b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:56:12 compute-0 podman[164911]: 2025-10-08 09:56:12.847600066 +0000 UTC m=+0.030331044 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:56:12 compute-0 systemd[1]: Started libpod-conmon-b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad.scope.
Oct 08 09:56:12 compute-0 python3.9[164867]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:56:12 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:56:13 compute-0 sudo[164865]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:13 compute-0 podman[164911]: 2025-10-08 09:56:13.029109313 +0000 UTC m=+0.211840301 container init b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:56:13 compute-0 podman[164911]: 2025-10-08 09:56:13.036455921 +0000 UTC m=+0.219186889 container start b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 08 09:56:13 compute-0 keen_zhukovsky[164928]: 167 167
Oct 08 09:56:13 compute-0 systemd[1]: libpod-b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad.scope: Deactivated successfully.
Oct 08 09:56:13 compute-0 podman[164911]: 2025-10-08 09:56:13.049814781 +0000 UTC m=+0.232545739 container attach b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 09:56:13 compute-0 podman[164911]: 2025-10-08 09:56:13.051533749 +0000 UTC m=+0.234264727 container died b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:56:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095613 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a208a4dda1b6d6a1a8eaf7b6db7d309e7c2faf9c5b25faff836ba1c29c69c404-merged.mount: Deactivated successfully.
Oct 08 09:56:13 compute-0 podman[164911]: 2025-10-08 09:56:13.130425929 +0000 UTC m=+0.313156867 container remove b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:56:13 compute-0 ceph-mon[73572]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:56:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:56:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:56:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:56:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:56:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:56:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:56:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:56:13 compute-0 systemd[1]: libpod-conmon-b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad.scope: Deactivated successfully.
Oct 08 09:56:13 compute-0 podman[165032]: 2025-10-08 09:56:13.303531113 +0000 UTC m=+0.048760475 container create 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 09:56:13 compute-0 systemd[1]: Started libpod-conmon-031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea.scope.
Oct 08 09:56:13 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:13 compute-0 podman[165032]: 2025-10-08 09:56:13.28238968 +0000 UTC m=+0.027619072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:56:13 compute-0 podman[165032]: 2025-10-08 09:56:13.3856021 +0000 UTC m=+0.130831472 container init 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:56:13 compute-0 podman[165032]: 2025-10-08 09:56:13.399207158 +0000 UTC m=+0.144436520 container start 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:56:13 compute-0 podman[165032]: 2025-10-08 09:56:13.404064612 +0000 UTC m=+0.149293994 container attach 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:56:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:13 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:13 compute-0 sudo[165127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqyzixdprroiodpkjcpexnhjfauihwnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917373.1723523-152-103884890199112/AnsiballZ_systemd_service.py'
Oct 08 09:56:13 compute-0 sudo[165127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:13 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:13.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:13 compute-0 jolly_keldysh[165072]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:56:13 compute-0 jolly_keldysh[165072]: --> All data devices are unavailable
Oct 08 09:56:13 compute-0 systemd[1]: libpod-031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea.scope: Deactivated successfully.
Oct 08 09:56:13 compute-0 podman[165032]: 2025-10-08 09:56:13.734747338 +0000 UTC m=+0.479976740 container died 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:56:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:13.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75-merged.mount: Deactivated successfully.
Oct 08 09:56:13 compute-0 podman[165032]: 2025-10-08 09:56:13.807558832 +0000 UTC m=+0.552788194 container remove 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 09:56:13 compute-0 systemd[1]: libpod-conmon-031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea.scope: Deactivated successfully.
Oct 08 09:56:13 compute-0 sudo[164790]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:13 compute-0 python3.9[165129]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:56:13 compute-0 sudo[165127]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:13 compute-0 sudo[165155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:56:13 compute-0 sudo[165155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:13 compute-0 sudo[165155]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:13 compute-0 sudo[165191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:56:13 compute-0 sudo[165191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:14 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:14 compute-0 ceph-mon[73572]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 08 09:56:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 489 B/s wr, 2 op/s
Oct 08 09:56:14 compute-0 podman[165329]: 2025-10-08 09:56:14.382264074 +0000 UTC m=+0.042727592 container create 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 08 09:56:14 compute-0 systemd[1]: Started libpod-conmon-76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c.scope.
Oct 08 09:56:14 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:56:14 compute-0 podman[165329]: 2025-10-08 09:56:14.363179289 +0000 UTC m=+0.023642817 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:56:14 compute-0 podman[165329]: 2025-10-08 09:56:14.472734672 +0000 UTC m=+0.133198200 container init 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:56:14 compute-0 podman[165329]: 2025-10-08 09:56:14.483611149 +0000 UTC m=+0.144074657 container start 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 09:56:14 compute-0 podman[165329]: 2025-10-08 09:56:14.488002987 +0000 UTC m=+0.148466515 container attach 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:56:14 compute-0 angry_liskov[165385]: 167 167
Oct 08 09:56:14 compute-0 podman[165329]: 2025-10-08 09:56:14.491855307 +0000 UTC m=+0.152318815 container died 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 08 09:56:14 compute-0 systemd[1]: libpod-76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c.scope: Deactivated successfully.
Oct 08 09:56:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-470bbf127254c30705b06ea64f1d38a4030cd2c6186d9a2597287044c09747af-merged.mount: Deactivated successfully.
Oct 08 09:56:14 compute-0 sudo[165418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eblmtvdimtnzaaslfdevwtdnpwarqryt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917374.031161-152-148744127080059/AnsiballZ_systemd_service.py'
Oct 08 09:56:14 compute-0 sudo[165418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:14 compute-0 podman[165329]: 2025-10-08 09:56:14.528204182 +0000 UTC m=+0.188667690 container remove 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 09:56:14 compute-0 systemd[1]: libpod-conmon-76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c.scope: Deactivated successfully.
Oct 08 09:56:14 compute-0 podman[165440]: 2025-10-08 09:56:14.693560195 +0000 UTC m=+0.048440693 container create 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 09:56:14 compute-0 systemd[1]: Started libpod-conmon-6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8.scope.
Oct 08 09:56:14 compute-0 podman[165440]: 2025-10-08 09:56:14.672565928 +0000 UTC m=+0.027446436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:56:14 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:14 compute-0 podman[165440]: 2025-10-08 09:56:14.791131044 +0000 UTC m=+0.146011542 container init 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:56:14 compute-0 podman[165440]: 2025-10-08 09:56:14.800578673 +0000 UTC m=+0.155459171 container start 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:56:14 compute-0 podman[165440]: 2025-10-08 09:56:14.80404353 +0000 UTC m=+0.158924048 container attach 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:56:14 compute-0 python3.9[165427]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:56:14 compute-0 sudo[165418]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:15 compute-0 amazing_jang[165456]: {
Oct 08 09:56:15 compute-0 amazing_jang[165456]:     "1": [
Oct 08 09:56:15 compute-0 amazing_jang[165456]:         {
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "devices": [
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "/dev/loop3"
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             ],
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "lv_name": "ceph_lv0",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "lv_size": "21470642176",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "name": "ceph_lv0",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "tags": {
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.cluster_name": "ceph",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.crush_device_class": "",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.encrypted": "0",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.osd_id": "1",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.type": "block",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.vdo": "0",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:                 "ceph.with_tpm": "0"
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             },
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "type": "block",
Oct 08 09:56:15 compute-0 amazing_jang[165456]:             "vg_name": "ceph_vg0"
Oct 08 09:56:15 compute-0 amazing_jang[165456]:         }
Oct 08 09:56:15 compute-0 amazing_jang[165456]:     ]
Oct 08 09:56:15 compute-0 amazing_jang[165456]: }
Oct 08 09:56:15 compute-0 systemd[1]: libpod-6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8.scope: Deactivated successfully.
Oct 08 09:56:15 compute-0 podman[165440]: 2025-10-08 09:56:15.131104534 +0000 UTC m=+0.485985032 container died 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd-merged.mount: Deactivated successfully.
Oct 08 09:56:15 compute-0 podman[165440]: 2025-10-08 09:56:15.251624026 +0000 UTC m=+0.606504524 container remove 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:56:15 compute-0 systemd[1]: libpod-conmon-6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8.scope: Deactivated successfully.
Oct 08 09:56:15 compute-0 sudo[165191]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:15 compute-0 sudo[165601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:56:15 compute-0 sudo[165601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:15 compute-0 sudo[165601]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:15 compute-0 sudo[165650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgjrqgshcysvtmrkpswnesjdasldtncc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917375.051034-152-156901029882344/AnsiballZ_systemd_service.py'
Oct 08 09:56:15 compute-0 sudo[165650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:15 compute-0 sudo[165655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:56:15 compute-0 sudo[165655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:15 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:15 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:15.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:15 compute-0 python3.9[165654]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 09:56:15 compute-0 sudo[165650]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:15] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Oct 08 09:56:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:15] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Oct 08 09:56:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:15.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:15 compute-0 podman[165744]: 2025-10-08 09:56:15.849199238 +0000 UTC m=+0.048075211 container create e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:56:15 compute-0 systemd[1]: Started libpod-conmon-e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68.scope.
Oct 08 09:56:15 compute-0 podman[165744]: 2025-10-08 09:56:15.829230255 +0000 UTC m=+0.028106238 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:56:15 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:56:15 compute-0 podman[165744]: 2025-10-08 09:56:15.958827423 +0000 UTC m=+0.157703396 container init e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 09:56:15 compute-0 podman[165744]: 2025-10-08 09:56:15.967127323 +0000 UTC m=+0.166003276 container start e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 09:56:15 compute-0 podman[165744]: 2025-10-08 09:56:15.969895786 +0000 UTC m=+0.168771769 container attach e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:56:15 compute-0 nice_colden[165761]: 167 167
Oct 08 09:56:15 compute-0 systemd[1]: libpod-e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68.scope: Deactivated successfully.
Oct 08 09:56:15 compute-0 podman[165744]: 2025-10-08 09:56:15.975103622 +0000 UTC m=+0.173979585 container died e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 08 09:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-75eb451f4fb64bdcab62645d2b8822bd80894d07de50b6dc57f33c9aac46b065-merged.mount: Deactivated successfully.
Oct 08 09:56:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:16 compute-0 podman[165744]: 2025-10-08 09:56:16.017925585 +0000 UTC m=+0.216801538 container remove e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:56:16 compute-0 systemd[1]: libpod-conmon-e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68.scope: Deactivated successfully.
Oct 08 09:56:16 compute-0 ceph-mon[73572]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 489 B/s wr, 2 op/s
Oct 08 09:56:16 compute-0 podman[165837]: 2025-10-08 09:56:16.225814562 +0000 UTC m=+0.059542968 container create e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:56:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 489 B/s wr, 2 op/s
Oct 08 09:56:16 compute-0 systemd[1]: Started libpod-conmon-e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520.scope.
Oct 08 09:56:16 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:16 compute-0 podman[165837]: 2025-10-08 09:56:16.207208025 +0000 UTC m=+0.040936441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:56:16 compute-0 podman[165837]: 2025-10-08 09:56:16.312837386 +0000 UTC m=+0.146565832 container init e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:56:16 compute-0 podman[165837]: 2025-10-08 09:56:16.319427818 +0000 UTC m=+0.153156234 container start e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 08 09:56:16 compute-0 podman[165837]: 2025-10-08 09:56:16.323548586 +0000 UTC m=+0.157277032 container attach e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 09:56:16 compute-0 sudo[165932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtwjcajhnxrejvkwwdfsxenhyxrquupu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917376.0692325-308-14561634600303/AnsiballZ_file.py'
Oct 08 09:56:16 compute-0 sudo[165932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:16 compute-0 python3.9[165940]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:16 compute-0 sudo[165932]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:16 compute-0 lvm[166058]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:56:16 compute-0 lvm[166058]: VG ceph_vg0 finished
Oct 08 09:56:16 compute-0 heuristic_knuth[165853]: {}
Oct 08 09:56:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:17.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:56:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:17.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:56:17 compute-0 systemd[1]: libpod-e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520.scope: Deactivated successfully.
Oct 08 09:56:17 compute-0 podman[165837]: 2025-10-08 09:56:17.033077932 +0000 UTC m=+0.866806328 container died e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 09:56:17 compute-0 systemd[1]: libpod-e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520.scope: Consumed 1.088s CPU time.
Oct 08 09:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9-merged.mount: Deactivated successfully.
Oct 08 09:56:17 compute-0 podman[165837]: 2025-10-08 09:56:17.084255247 +0000 UTC m=+0.917983643 container remove e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 09:56:17 compute-0 systemd[1]: libpod-conmon-e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520.scope: Deactivated successfully.
Oct 08 09:56:17 compute-0 sudo[165655]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:56:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:56:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:56:17 compute-0 sudo[166167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkkuhynitfjvhtjcbwibhoqfvohbvawk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917376.8805957-308-237578580556387/AnsiballZ_file.py'
Oct 08 09:56:17 compute-0 sudo[166167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:17 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:56:17 compute-0 sudo[166170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:56:17 compute-0 sudo[166170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:17 compute-0 sudo[166170]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:17 compute-0 python3.9[166169]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:17 compute-0 sudo[166167]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:17 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:17 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:17.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:17.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:56:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:56:17 compute-0 sudo[166358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naakffiiwxuskbxujdvwgodkxhpbibxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917377.5565734-308-124752993099325/AnsiballZ_file.py'
Oct 08 09:56:17 compute-0 sudo[166358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:56:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:56:17 compute-0 podman[166318]: 2025-10-08 09:56:17.908134457 +0000 UTC m=+0.086575919 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 08 09:56:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:18 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:18 compute-0 python3.9[166364]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:18 compute-0 sudo[166358]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:56:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:56:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:56:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:56:18 compute-0 ceph-mon[73572]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 489 B/s wr, 2 op/s
Oct 08 09:56:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:56:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:56:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:56:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 489 B/s wr, 2 op/s
Oct 08 09:56:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:18 compute-0 sudo[166520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uktazskpomisnzqeqhdtznzkyfwcpodx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917378.2137434-308-131108744174272/AnsiballZ_file.py'
Oct 08 09:56:18 compute-0 sudo[166520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:18.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:19 compute-0 python3.9[166522]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:19 compute-0 sudo[166520]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:19 compute-0 sudo[166673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbmqcethmcxjlknekkjfazvsnvjrygti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917379.141512-308-198013026038879/AnsiballZ_file.py'
Oct 08 09:56:19 compute-0 sudo[166673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:19 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:19 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:19 compute-0 python3.9[166675]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:19.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:19 compute-0 sudo[166673]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:20 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:20 compute-0 sudo[166826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwzjqhwppccbetrxlfykqezrqqdcnsfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917379.7275755-308-208408818203424/AnsiballZ_file.py'
Oct 08 09:56:20 compute-0 sudo[166826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:20 compute-0 python3.9[166828]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:56:20 compute-0 sudo[166826]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:20 compute-0 sudo[166978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqsbirzzinxkjsxufkgcfgkuebapdkjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917380.409878-308-81408552974938/AnsiballZ_file.py'
Oct 08 09:56:20 compute-0 sudo[166978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:20 compute-0 ceph-mon[73572]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 489 B/s wr, 2 op/s
Oct 08 09:56:21 compute-0 python3.9[166980]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:21 compute-0 sudo[166978]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:21 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:21 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:21.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:21 compute-0 sudo[167131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yztzqufnzpazcfciupnxcagohbozdaow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917381.3719652-458-4925777382734/AnsiballZ_file.py'
Oct 08 09:56:21 compute-0 sudo[167131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:21 compute-0 python3.9[167133]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:21 compute-0 sudo[167131]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:56:22 compute-0 sudo[167258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:56:22 compute-0 sudo[167258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:22 compute-0 sudo[167258]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:22 compute-0 sudo[167308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flncmmyuttzmqkxhoxvdxlaablfxbthx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917382.1159894-458-211776342860325/AnsiballZ_file.py'
Oct 08 09:56:22 compute-0 sudo[167308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:22 compute-0 python3.9[167311]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:22 compute-0 sudo[167308]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:22 compute-0 ceph-mon[73572]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:56:23 compute-0 sudo[167463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llvzivehsvucxqlkqnkhmjeksuwzjauv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917383.198743-458-197777779053577/AnsiballZ_file.py'
Oct 08 09:56:23 compute-0 sudo[167463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:23.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:23 compute-0 python3.9[167465]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:23 compute-0 sudo[167463]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:23.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:24 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:24 compute-0 sudo[167616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihsgljeacgcxizzefobakeghmejrfzdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917383.7833285-458-29904576478485/AnsiballZ_file.py'
Oct 08 09:56:24 compute-0 sudo[167616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:24 compute-0 ceph-mon[73572]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:56:24 compute-0 python3.9[167618]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:24 compute-0 sudo[167616]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:56:24 compute-0 sudo[167768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdnrhhjzmgihqqrjlftcwuevmrxooqgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917384.3588064-458-103115996053232/AnsiballZ_file.py'
Oct 08 09:56:24 compute-0 sudo[167768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:24 compute-0 python3.9[167770]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:24 compute-0 sudo[167768]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:25 compute-0 sudo[167921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiuzyycksbvdclsmuomzvzwnxnkkyctp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917384.9670963-458-273494509600824/AnsiballZ_file.py'
Oct 08 09:56:25 compute-0 sudo[167921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:25 compute-0 podman[167923]: 2025-10-08 09:56:25.379641963 +0000 UTC m=+0.051825167 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:56:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:25 compute-0 python3.9[167924]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:25 compute-0 sudo[167921]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Oct 08 09:56:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Oct 08 09:56:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:25.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:25 compute-0 sudo[168093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikriyizhyawylcggwqyrjodgklfpnudp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917385.6730416-458-266088105547528/AnsiballZ_file.py'
Oct 08 09:56:25 compute-0 sudo[168093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:26 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:26 compute-0 python3.9[168095]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:56:26 compute-0 sudo[168093]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:26 compute-0 ceph-mon[73572]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:56:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:56:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:27.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:27 compute-0 sudo[168246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfixcxkmolwjogbgrhdczpvtmrboplxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917386.6107373-611-25926569273913/AnsiballZ_command.py'
Oct 08 09:56:27 compute-0 sudo[168246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:27 compute-0 ceph-mon[73572]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:56:27 compute-0 python3.9[168248]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:56:27 compute-0 sudo[168246]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:27.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:28 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:28 compute-0 python3.9[168403]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 09:56:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:56:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:56:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:28.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:56:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:28.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:56:28 compute-0 sudo[168553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdqellztpsajzyxccnslkpxkxztutjmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917388.5524445-665-74974974023669/AnsiballZ_systemd_service.py'
Oct 08 09:56:28 compute-0 sudo[168553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:29 compute-0 python3.9[168555]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 09:56:29 compute-0 systemd[1]: Reloading.
Oct 08 09:56:29 compute-0 ceph-mon[73572]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:56:29 compute-0 systemd-rc-local-generator[168587]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:56:29 compute-0 systemd-sysv-generator[168590]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:56:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:29 compute-0 sudo[168553]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:29.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:30 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:30 compute-0 sudo[168743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvsasdyrujckfymthlpjsnpmacttboh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917389.7703173-689-76002768890202/AnsiballZ_command.py'
Oct 08 09:56:30 compute-0 sudo[168743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:56:30 compute-0 python3.9[168745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:56:30 compute-0 sudo[168743]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:30 compute-0 sudo[168896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlfxqvaaormftqcjosmxifjmlilwylue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917390.4297888-689-145927200898933/AnsiballZ_command.py'
Oct 08 09:56:30 compute-0 sudo[168896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:30 compute-0 python3.9[168898]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:56:30 compute-0 sudo[168896]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:31 compute-0 ceph-mon[73572]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:56:31 compute-0 sudo[169050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnepucbtcwuqfumbkwjflvoudqvuwevx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917391.053926-689-195215275015203/AnsiballZ_command.py'
Oct 08 09:56:31 compute-0 sudo[169050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:31 compute-0 python3.9[169052]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:56:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:31 compute-0 sudo[169050]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:31.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:31.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:31 compute-0 sudo[169204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enfipxeoxtsfpwvgipvxladebmfzocgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917391.7027543-689-76306453120541/AnsiballZ_command.py'
Oct 08 09:56:31 compute-0 sudo[169204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:32 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:32 compute-0 python3.9[169206]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:56:32 compute-0 sudo[169204]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:56:32 compute-0 sudo[169357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfuzmtbyenyefhnuuiaylyyxjjrcrlts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917392.3800128-689-178933812543546/AnsiballZ_command.py'
Oct 08 09:56:32 compute-0 sudo[169357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:56:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:56:32 compute-0 python3.9[169359]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:56:32 compute-0 sudo[169357]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:33 compute-0 sudo[169511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnqdivjngtofqajdgwywxcnwoluwbaep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917393.000541-689-54857275158305/AnsiballZ_command.py'
Oct 08 09:56:33 compute-0 sudo[169511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:33 compute-0 ceph-mon[73572]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:56:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:56:33 compute-0 python3.9[169513]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:56:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:33 compute-0 sudo[169511]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:33.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:33 compute-0 sudo[169667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfosqxxqvkymuqisatwcnfdrijzejyve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917393.6544337-689-134416242511187/AnsiballZ_command.py'
Oct 08 09:56:33 compute-0 sudo[169667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:34 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:34 compute-0 python3.9[169669]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:56:34 compute-0 sudo[169667]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:56:35 compute-0 sudo[169821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zybssjwwumvcjugsrgtjpxwronrkupnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917394.9482608-851-95490156654826/AnsiballZ_getent.py'
Oct 08 09:56:35 compute-0 sudo[169821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:35 compute-0 ceph-mon[73572]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:56:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:35 compute-0 python3.9[169823]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 08 09:56:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:35.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:35 compute-0 sudo[169821]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:35] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 09:56:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:35] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 09:56:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:36 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:56:36 compute-0 sudo[169975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpslebrtpcwhmzacutmljudwulfhadpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917395.799679-875-126312527314404/AnsiballZ_group.py'
Oct 08 09:56:36 compute-0 sudo[169975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:36 compute-0 python3.9[169977]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 08 09:56:36 compute-0 groupadd[169978]: group added to /etc/group: name=libvirt, GID=42473
Oct 08 09:56:36 compute-0 groupadd[169978]: group added to /etc/gshadow: name=libvirt
Oct 08 09:56:36 compute-0 groupadd[169978]: new group: name=libvirt, GID=42473
Oct 08 09:56:36 compute-0 sudo[169975]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:37.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:56:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:37.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095637 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:56:37 compute-0 sudo[170134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qisbnqlkdlwzghnwouyattjyzxccnfuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917396.7291536-899-82396247418056/AnsiballZ_user.py'
Oct 08 09:56:37 compute-0 sudo[170134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:37 compute-0 python3.9[170136]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 08 09:56:37 compute-0 ceph-mon[73572]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:56:37 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 09:56:37 compute-0 useradd[170138]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 08 09:56:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a600022e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:37 compute-0 sudo[170134]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:37.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:56:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:56:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:38 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:56:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:38 compute-0 sudo[170296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihwmmklfezljbbpvvtdkksqblexazsiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917398.2534528-932-219282457607211/AnsiballZ_setup.py'
Oct 08 09:56:38 compute-0 sudo[170296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:38.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:56:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:38.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:38 compute-0 python3.9[170298]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 09:56:39 compute-0 sudo[170296]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:39 compute-0 sudo[170381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rywtpnatukfrfnzkyjeytsrfyjrcsggi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917398.2534528-932-219282457607211/AnsiballZ_dnf.py'
Oct 08 09:56:39 compute-0 sudo[170381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:56:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a600022e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:39.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:39 compute-0 ceph-mon[73572]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:56:39 compute-0 python3.9[170383]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 09:56:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:39.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:40 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:56:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:41.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:41.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:41 compute-0 ceph-mon[73572]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:56:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:42 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:56:42 compute-0 sudo[170395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:56:42 compute-0 sudo[170395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:56:42 compute-0 sudo[170395]: pam_unix(sudo:session): session closed for user root
Oct 08 09:56:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:43.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:43.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:44 compute-0 ceph-mon[73572]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:56:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:44 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:56:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:56:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:45.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:56:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:45] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 09:56:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:45] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 09:56:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:45.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:46 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:46 compute-0 ceph-mon[73572]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:56:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:46 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:56:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:56:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:47.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:56:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:47.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:47 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:56:47
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', '.nfs', '.rgw.root', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:56:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:56:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:56:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:47.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:56:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:56:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:48 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:56:48 compute-0 ceph-mon[73572]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:56:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:56:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:56:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:48.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:48 compute-0 podman[170430]: 2025-10-08 09:56:48.920559331 +0000 UTC m=+0.084907236 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 08 09:56:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:56:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:56:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:49.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:56:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:49.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:56:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:50 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:50 compute-0 ceph-mon[73572]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:56:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:56:51 compute-0 ceph-mon[73572]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:56:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:56:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:51.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:56:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:51.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:52 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:52 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:56:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:56:53 compute-0 ceph-mon[73572]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 09:56:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:53.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:53.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:54 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:56:55 compute-0 ceph-mon[73572]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:56:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:56:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:55.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:56:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:56:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:56:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:56:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:55.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:56:55 compute-0 podman[170637]: 2025-10-08 09:56:55.919528604 +0000 UTC m=+0.068177622 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 09:56:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:56 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:56:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:57.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:57 compute-0 ceph-mon[73572]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:56:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:56:57.395 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 09:56:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:56:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 09:56:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:56:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 09:56:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:56:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:57.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:56:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:57.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:58 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:56:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:56:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:58.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:56:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095659 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:56:59 compute-0 ceph-mon[73572]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 09:56:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:56:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:59.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:56:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:56:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:56:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:00 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:57:01 compute-0 ceph-mon[73572]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:57:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:01.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:01.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:02 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:57:02 compute-0 sudo[170669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:57:02 compute-0 sudo[170669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:02 compute-0 sudo[170669]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:57:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:57:03 compute-0 ceph-mon[73572]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:57:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:57:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:57:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:03.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:57:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:57:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:03.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:57:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:04 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:57:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:05.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:05 compute-0 ceph-mon[73572]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:57:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:05] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:57:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:05] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:57:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:05.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:06 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:07.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:57:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:07.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:57:07 compute-0 ceph-mon[73572]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:07.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:07.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:08 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:08.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:57:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:08.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:57:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:08.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:57:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:09.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:09.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:10 compute-0 kernel: SELinux:  Converting 2772 SID table entries...
Oct 08 09:57:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:57:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 09:57:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:57:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:57:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:57:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:57:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:57:10 compute-0 ceph-mon[73572]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:11 compute-0 ceph-mon[73572]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:11.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:11.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:12 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:57:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:13 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:13 compute-0 ceph-mon[73572]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:57:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:13 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:57:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:57:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:13.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:14 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:57:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:15 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:15 compute-0 kernel: ganesha.nfsd[159522]: segfault at 50 ip 00007f8b1838a32e sp 00007f8ad4ff8210 error 4 in libntirpc.so.5.8[7f8b1836f000+2c000] likely on CPU 6 (core 0, socket 6)
Oct 08 09:57:15 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 09:57:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:15 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c004060 fd 39 proxy ignored for local
Oct 08 09:57:15 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 08 09:57:15 compute-0 systemd[1]: Started Process Core Dump (PID 170717/UID 0).
Oct 08 09:57:15 compute-0 ceph-mon[73572]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:57:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:15.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:15] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:57:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:15] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:57:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:57:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:15.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:57:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:57:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:17.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:17 compute-0 systemd-coredump[170718]: Process 156719 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007f8b1838a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 09:57:17 compute-0 systemd[1]: systemd-coredump@3-170717-0.service: Deactivated successfully.
Oct 08 09:57:17 compute-0 systemd[1]: systemd-coredump@3-170717-0.service: Consumed 1.491s CPU time.
Oct 08 09:57:17 compute-0 podman[170725]: 2025-10-08 09:57:17.233390548 +0000 UTC m=+0.026841050 container died c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 09:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676-merged.mount: Deactivated successfully.
Oct 08 09:57:17 compute-0 podman[170725]: 2025-10-08 09:57:17.285468345 +0000 UTC m=+0.078918827 container remove c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 08 09:57:17 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 09:57:17 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 09:57:17 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.425s CPU time.
Oct 08 09:57:17 compute-0 sudo[170768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:57:17 compute-0 sudo[170768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:17 compute-0 sudo[170768]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:17 compute-0 sudo[170793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:57:17 compute-0 sudo[170793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:57:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:17.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:57:17 compute-0 ceph-mon[73572]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:57:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:57:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:57:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:57:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:57:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:17.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:18 compute-0 sudo[170793]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:57:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:57:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:57:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:57:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:57:18 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:57:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:57:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:57:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Oct 08 09:57:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:57:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:57:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:57:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:57:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:57:18 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:57:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:57:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:57:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:57:18 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:57:18 compute-0 sudo[170850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:57:18 compute-0 sudo[170850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:18 compute-0 sudo[170850]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:18 compute-0 sudo[170875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:57:18 compute-0 sudo[170875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:57:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:57:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:57:18 compute-0 ceph-mon[73572]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Oct 08 09:57:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:57:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:57:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:57:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:57:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:57:18 compute-0 podman[170942]: 2025-10-08 09:57:18.823970135 +0000 UTC m=+0.046434240 container create 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 09:57:18 compute-0 systemd[1]: Started libpod-conmon-91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387.scope.
Oct 08 09:57:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:18.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:57:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:18.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:18 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:57:18 compute-0 podman[170942]: 2025-10-08 09:57:18.803710233 +0000 UTC m=+0.026174348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:57:18 compute-0 podman[170942]: 2025-10-08 09:57:18.905700365 +0000 UTC m=+0.128164500 container init 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 09:57:18 compute-0 podman[170942]: 2025-10-08 09:57:18.912624874 +0000 UTC m=+0.135088989 container start 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 09:57:18 compute-0 podman[170942]: 2025-10-08 09:57:18.916915937 +0000 UTC m=+0.139380072 container attach 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 09:57:18 compute-0 magical_jennings[170958]: 167 167
Oct 08 09:57:18 compute-0 systemd[1]: libpod-91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387.scope: Deactivated successfully.
Oct 08 09:57:18 compute-0 podman[170942]: 2025-10-08 09:57:18.927348772 +0000 UTC m=+0.149812887 container died 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3d629ac13b1223662e4c3b1f4553eec34810d92664d6e27145fe5437b9a979d-merged.mount: Deactivated successfully.
Oct 08 09:57:18 compute-0 podman[170942]: 2025-10-08 09:57:18.979016365 +0000 UTC m=+0.201480480 container remove 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:57:18 compute-0 systemd[1]: libpod-conmon-91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387.scope: Deactivated successfully.
Oct 08 09:57:19 compute-0 podman[170965]: 2025-10-08 09:57:19.069640149 +0000 UTC m=+0.113702380 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 09:57:19 compute-0 podman[171011]: 2025-10-08 09:57:19.151971168 +0000 UTC m=+0.045103626 container create b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 08 09:57:19 compute-0 systemd[1]: Started libpod-conmon-b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab.scope.
Oct 08 09:57:19 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:19 compute-0 podman[171011]: 2025-10-08 09:57:19.136159305 +0000 UTC m=+0.029291793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:57:19 compute-0 podman[171011]: 2025-10-08 09:57:19.248922532 +0000 UTC m=+0.142055040 container init b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:57:19 compute-0 podman[171011]: 2025-10-08 09:57:19.25909353 +0000 UTC m=+0.152226008 container start b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:57:19 compute-0 podman[171011]: 2025-10-08 09:57:19.289155536 +0000 UTC m=+0.182288044 container attach b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 08 09:57:19 compute-0 peaceful_heisenberg[171028]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:57:19 compute-0 peaceful_heisenberg[171028]: --> All data devices are unavailable
Oct 08 09:57:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:57:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:19.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:57:19 compute-0 systemd[1]: libpod-b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab.scope: Deactivated successfully.
Oct 08 09:57:19 compute-0 podman[171011]: 2025-10-08 09:57:19.650498895 +0000 UTC m=+0.543631363 container died b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 09:57:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090-merged.mount: Deactivated successfully.
Oct 08 09:57:19 compute-0 podman[171011]: 2025-10-08 09:57:19.69925083 +0000 UTC m=+0.592383298 container remove b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 09:57:19 compute-0 systemd[1]: libpod-conmon-b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab.scope: Deactivated successfully.
Oct 08 09:57:19 compute-0 sudo[170875]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:19 compute-0 sudo[171054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:57:19 compute-0 sudo[171054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:19 compute-0 sudo[171054]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:19 compute-0 sudo[171079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:57:19 compute-0 sudo[171079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:19.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 342 B/s rd, 0 op/s
Oct 08 09:57:20 compute-0 podman[171146]: 2025-10-08 09:57:20.331460348 +0000 UTC m=+0.047802005 container create 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:57:20 compute-0 ceph-mon[73572]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 342 B/s rd, 0 op/s
Oct 08 09:57:20 compute-0 systemd[1]: Started libpod-conmon-8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1.scope.
Oct 08 09:57:20 compute-0 podman[171146]: 2025-10-08 09:57:20.308606181 +0000 UTC m=+0.024947838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:57:20 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:57:20 compute-0 podman[171146]: 2025-10-08 09:57:20.433648525 +0000 UTC m=+0.149990152 container init 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:57:20 compute-0 podman[171146]: 2025-10-08 09:57:20.442992855 +0000 UTC m=+0.159334472 container start 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 08 09:57:20 compute-0 podman[171146]: 2025-10-08 09:57:20.446830532 +0000 UTC m=+0.163172159 container attach 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:57:20 compute-0 practical_curran[171162]: 167 167
Oct 08 09:57:20 compute-0 systemd[1]: libpod-8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1.scope: Deactivated successfully.
Oct 08 09:57:20 compute-0 podman[171146]: 2025-10-08 09:57:20.45128706 +0000 UTC m=+0.167628687 container died 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:57:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e22209959425173c155f0f5c87883d9c36f56449868a1336c880879342e88aa3-merged.mount: Deactivated successfully.
Oct 08 09:57:20 compute-0 podman[171146]: 2025-10-08 09:57:20.488897467 +0000 UTC m=+0.205239084 container remove 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:57:20 compute-0 systemd[1]: libpod-conmon-8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1.scope: Deactivated successfully.
Oct 08 09:57:20 compute-0 podman[171185]: 2025-10-08 09:57:20.688141832 +0000 UTC m=+0.048167218 container create 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 09:57:20 compute-0 systemd[1]: Started libpod-conmon-5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61.scope.
Oct 08 09:57:20 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:57:20 compute-0 podman[171185]: 2025-10-08 09:57:20.670172987 +0000 UTC m=+0.030198383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:20 compute-0 podman[171185]: 2025-10-08 09:57:20.790215176 +0000 UTC m=+0.150240612 container init 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:57:20 compute-0 podman[171185]: 2025-10-08 09:57:20.801442648 +0000 UTC m=+0.161468044 container start 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:57:20 compute-0 podman[171185]: 2025-10-08 09:57:20.807761278 +0000 UTC m=+0.167786724 container attach 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:57:21 compute-0 jovial_yalow[171202]: {
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:     "1": [
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:         {
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "devices": [
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "/dev/loop3"
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             ],
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "lv_name": "ceph_lv0",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "lv_size": "21470642176",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "name": "ceph_lv0",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "tags": {
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.cluster_name": "ceph",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.crush_device_class": "",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.encrypted": "0",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.osd_id": "1",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.type": "block",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.vdo": "0",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:                 "ceph.with_tpm": "0"
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             },
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "type": "block",
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:             "vg_name": "ceph_vg0"
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:         }
Oct 08 09:57:21 compute-0 jovial_yalow[171202]:     ]
Oct 08 09:57:21 compute-0 jovial_yalow[171202]: }
Oct 08 09:57:21 compute-0 systemd[1]: libpod-5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61.scope: Deactivated successfully.
Oct 08 09:57:21 compute-0 podman[171185]: 2025-10-08 09:57:21.126314027 +0000 UTC m=+0.486339463 container died 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 09:57:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9-merged.mount: Deactivated successfully.
Oct 08 09:57:21 compute-0 podman[171185]: 2025-10-08 09:57:21.404149658 +0000 UTC m=+0.764175044 container remove 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:57:21 compute-0 systemd[1]: libpod-conmon-5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61.scope: Deactivated successfully.
Oct 08 09:57:21 compute-0 sudo[171079]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:21 compute-0 sudo[171230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:57:21 compute-0 sudo[171230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:21 compute-0 sudo[171230]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:21 compute-0 sudo[171255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:57:21 compute-0 sudo[171255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095721 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:57:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:57:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:21.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:57:21 compute-0 kernel: SELinux:  Converting 2772 SID table entries...
Oct 08 09:57:21 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:57:21 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 09:57:21 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:57:21 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:57:21 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:57:21 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:57:21 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:57:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:57:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:21.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:57:21 compute-0 podman[171324]: 2025-10-08 09:57:21.976323234 +0000 UTC m=+0.043621106 container create 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:57:21 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 08 09:57:22 compute-0 systemd[1]: Started libpod-conmon-8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0.scope.
Oct 08 09:57:22 compute-0 podman[171324]: 2025-10-08 09:57:21.957010404 +0000 UTC m=+0.024308296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:57:22 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:57:22 compute-0 podman[171324]: 2025-10-08 09:57:22.07967915 +0000 UTC m=+0.146977042 container init 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:57:22 compute-0 podman[171324]: 2025-10-08 09:57:22.085676139 +0000 UTC m=+0.152974011 container start 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 09:57:22 compute-0 podman[171324]: 2025-10-08 09:57:22.089367572 +0000 UTC m=+0.156665464 container attach 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:57:22 compute-0 vigilant_noether[171341]: 167 167
Oct 08 09:57:22 compute-0 systemd[1]: libpod-8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0.scope: Deactivated successfully.
Oct 08 09:57:22 compute-0 podman[171324]: 2025-10-08 09:57:22.092384182 +0000 UTC m=+0.159682074 container died 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 09:57:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-07a0647b039b8f2a990a9f3157f6e0bad85a1eea48655bb58ecf2ba4e5fe80ea-merged.mount: Deactivated successfully.
Oct 08 09:57:22 compute-0 podman[171324]: 2025-10-08 09:57:22.131577401 +0000 UTC m=+0.198875273 container remove 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:57:22 compute-0 systemd[1]: libpod-conmon-8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0.scope: Deactivated successfully.
Oct 08 09:57:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Oct 08 09:57:22 compute-0 podman[171364]: 2025-10-08 09:57:22.284923614 +0000 UTC m=+0.037814865 container create bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:57:22 compute-0 systemd[1]: Started libpod-conmon-bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4.scope.
Oct 08 09:57:22 compute-0 ceph-mon[73572]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Oct 08 09:57:22 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:22 compute-0 podman[171364]: 2025-10-08 09:57:22.26730154 +0000 UTC m=+0.020192821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:57:22 compute-0 podman[171364]: 2025-10-08 09:57:22.38072623 +0000 UTC m=+0.133617501 container init bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:57:22 compute-0 podman[171364]: 2025-10-08 09:57:22.391119015 +0000 UTC m=+0.144010276 container start bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:57:22 compute-0 podman[171364]: 2025-10-08 09:57:22.395976805 +0000 UTC m=+0.148868066 container attach bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:57:22 compute-0 sudo[171391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:57:22 compute-0 sudo[171391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:22 compute-0 sudo[171391]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:22 compute-0 lvm[171479]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:57:22 compute-0 lvm[171479]: VG ceph_vg0 finished
Oct 08 09:57:23 compute-0 trusting_bhabha[171380]: {}
Oct 08 09:57:23 compute-0 lvm[171483]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:57:23 compute-0 lvm[171483]: VG ceph_vg0 finished
Oct 08 09:57:23 compute-0 systemd[1]: libpod-bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4.scope: Deactivated successfully.
Oct 08 09:57:23 compute-0 podman[171364]: 2025-10-08 09:57:23.090134376 +0000 UTC m=+0.843025677 container died bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:57:23 compute-0 systemd[1]: libpod-bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4.scope: Consumed 1.104s CPU time.
Oct 08 09:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658-merged.mount: Deactivated successfully.
Oct 08 09:57:23 compute-0 podman[171364]: 2025-10-08 09:57:23.14753891 +0000 UTC m=+0.900430171 container remove bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 09:57:23 compute-0 systemd[1]: libpod-conmon-bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4.scope: Deactivated successfully.
Oct 08 09:57:23 compute-0 sudo[171255]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:57:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:57:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:57:23 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:57:23 compute-0 sudo[171497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:57:23 compute-0 sudo[171497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:23 compute-0 sudo[171497]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:57:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:23.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:57:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:23.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 342 B/s rd, 0 op/s
Oct 08 09:57:24 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:57:24 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:57:25 compute-0 ceph-mon[73572]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 342 B/s rd, 0 op/s
Oct 08 09:57:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:57:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:25.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:57:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct 08 09:57:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct 08 09:57:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:25.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 171 B/s rd, 0 op/s
Oct 08 09:57:26 compute-0 ceph-mon[73572]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 171 B/s rd, 0 op/s
Oct 08 09:57:26 compute-0 podman[171525]: 2025-10-08 09:57:26.914001926 +0000 UTC m=+0.071827032 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 08 09:57:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:27.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:27 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 4.
Oct 08 09:57:27 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:57:27 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.425s CPU time.
Oct 08 09:57:27 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:57:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:27.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:27 compute-0 podman[171595]: 2025-10-08 09:57:27.783151638 +0000 UTC m=+0.019107824 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:57:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:27.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:28 compute-0 podman[171595]: 2025-10-08 09:57:28.006321186 +0000 UTC m=+0.242277352 container create 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 09:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:57:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 171 B/s rd, 0 op/s
Oct 08 09:57:28 compute-0 podman[171595]: 2025-10-08 09:57:28.296880338 +0000 UTC m=+0.532836514 container init 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 08 09:57:28 compute-0 podman[171595]: 2025-10-08 09:57:28.301975337 +0000 UTC m=+0.537931533 container start 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:57:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 09:57:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 09:57:28 compute-0 bash[171595]: 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676
Oct 08 09:57:28 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:57:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 09:57:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 09:57:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 09:57:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 09:57:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 09:57:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:57:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:28.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:29.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:29 compute-0 ceph-mon[73572]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 171 B/s rd, 0 op/s
Oct 08 09:57:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:29.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:57:30 compute-0 ceph-mon[73572]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:57:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:57:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:31.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:57:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:57:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:31.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:57:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:57:32 compute-0 ceph-mon[73572]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:57:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:57:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:57:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:57:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 09:57:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:33.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 09:57:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:33.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:57:34 compute-0 ceph-mon[73572]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:57:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:34 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:57:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:34 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:57:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:57:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:57:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:35.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:57:36 compute-0 ceph-mon[73572]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:57:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:37.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:57:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:37.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:37.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:37.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:57:38 compute-0 ceph-mon[73572]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 09:57:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:38.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:57:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:38.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:57:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:39.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:57:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:39.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:57:40 compute-0 ceph-mon[73572]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 09:57:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:57:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:41 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0424000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:41 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:57:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:41.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:57:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:41.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:42 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:57:42 compute-0 ceph-mon[73572]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:57:42 compute-0 sudo[177631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:57:42 compute-0 sudo[177631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:57:42 compute-0 sudo[177631]: pam_unix(sudo:session): session closed for user root
Oct 08 09:57:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:43 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095743 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:57:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:43 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0414001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:43.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:43.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:44 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0400000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:57:44 compute-0 ceph-mon[73572]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:57:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:45 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:45 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:45.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:57:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:57:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:46 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0414001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:57:46 compute-0 ceph-mon[73572]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:57:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:47.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:47 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0400001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:57:47 compute-0 kernel: ganesha.nfsd[176470]: segfault at 50 ip 00007f04d3a8e32e sp 00007f049e7fb210 error 4 in libntirpc.so.5.8[7f04d3a73000+2c000] likely on CPU 6 (core 0, socket 6)
Oct 08 09:57:47 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 09:57:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:47 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420002f20 fd 39 proxy ignored for local
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:57:47
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.log', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr', '.nfs', '.rgw.root', 'cephfs.cephfs.data', 'backups']
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:57:47 compute-0 systemd[1]: Started Process Core Dump (PID 180972/UID 0).
Oct 08 09:57:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:57:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:57:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:57:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:57:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:47.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:57:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:57:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:57:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:48.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:49 compute-0 ceph-mon[73572]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:57:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:49.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:49 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct 08 09:57:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:49.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:49 compute-0 podman[182508]: 2025-10-08 09:57:49.931863713 +0000 UTC m=+0.097405402 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 08 09:57:50 compute-0 systemd-coredump[180985]: Process 171615 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 42:
                                                    #0  0x00007f04d3a8e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 09:57:50 compute-0 systemd[1]: systemd-coredump@4-180972-0.service: Deactivated successfully.
Oct 08 09:57:50 compute-0 systemd[1]: systemd-coredump@4-180972-0.service: Consumed 1.206s CPU time.
Oct 08 09:57:50 compute-0 podman[182710]: 2025-10-08 09:57:50.151360971 +0000 UTC m=+0.033865955 container died 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c-merged.mount: Deactivated successfully.
Oct 08 09:57:50 compute-0 podman[182710]: 2025-10-08 09:57:50.191527646 +0000 UTC m=+0.074032620 container remove 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:57:50 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 09:57:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 09:57:50 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 09:57:50 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.384s CPU time.
Oct 08 09:57:51 compute-0 ceph-mon[73572]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 09:57:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:51.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:52 compute-0 ceph-mon[73572]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:53.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:53.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:54 compute-0 ceph-mon[73572]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 09:57:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095755 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:57:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:55.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:55] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 09:57:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:55] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 09:57:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:57:56 compute-0 ceph-mon[73572]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:57:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:57.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:57:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:57.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:57:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 09:57:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:57:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 09:57:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:57:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 09:57:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:57.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:57 compute-0 podman[187617]: 2025-10-08 09:57:57.890261807 +0000 UTC m=+0.044167389 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 09:57:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:57.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:57:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:57:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:58.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:57:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:57:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:59.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:57:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:57:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:57:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:59.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:00 compute-0 ceph-mon[73572]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:58:00 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 5.
Oct 08 09:58:00 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:58:00 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.384s CPU time.
Oct 08 09:58:00 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:58:00 compute-0 podman[188603]: 2025-10-08 09:58:00.842182069 +0000 UTC m=+0.051561447 container create c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 09:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:00 compute-0 podman[188603]: 2025-10-08 09:58:00.820295546 +0000 UTC m=+0.029674934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:58:00 compute-0 podman[188603]: 2025-10-08 09:58:00.917106268 +0000 UTC m=+0.126485726 container init c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:58:00 compute-0 podman[188603]: 2025-10-08 09:58:00.922025622 +0000 UTC m=+0.131405000 container start c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 09:58:00 compute-0 bash[188603]: c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa
Oct 08 09:58:00 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:58:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 09:58:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 09:58:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 09:58:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 09:58:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 09:58:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 09:58:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:01 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 09:58:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:01 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:58:01 compute-0 ceph-mon[73572]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 09:58:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:01.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:01.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:02 compute-0 ceph-mon[73572]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:58:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:58:02 compute-0 sudo[188673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:58:02 compute-0 sudo[188673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:02 compute-0 sudo[188673]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:58:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:03.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:03.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:04 compute-0 ceph-mon[73572]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:05] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:58:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:05] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:58:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:05.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:06 compute-0 ceph-mon[73572]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:07.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:58:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:07.016Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:58:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:07.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:07 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:58:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:07 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:58:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:07.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:08 compute-0 ceph-mon[73572]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:08.891Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:58:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:08.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:58:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:08.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=sqlstore.transactions t=2025-10-08T09:58:09.447156599Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct 08 09:58:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=cleanup t=2025-10-08T09:58:09.461117916Z level=info msg="Completed cleanup jobs" duration=25.299387ms
Oct 08 09:58:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T09:58:09.565954305Z level=info msg="Update check succeeded" duration=53.886013ms
Oct 08 09:58:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T09:58:09.567629652Z level=info msg="Update check succeeded" duration=55.616922ms
Oct 08 09:58:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:09.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:10 compute-0 ceph-mon[73572]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:11.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:11 compute-0 kernel: SELinux:  Converting 2773 SID table entries...
Oct 08 09:58:11 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 09:58:11 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 09:58:11 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 09:58:11 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 09:58:11 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 09:58:11 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 09:58:11 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 09:58:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:11.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:12 compute-0 ceph-mon[73572]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:12 compute-0 groupadd[188721]: group added to /etc/group: name=dnsmasq, GID=991
Oct 08 09:58:12 compute-0 groupadd[188721]: group added to /etc/gshadow: name=dnsmasq
Oct 08 09:58:12 compute-0 groupadd[188721]: new group: name=dnsmasq, GID=991
Oct 08 09:58:12 compute-0 useradd[188728]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 08 09:58:13 compute-0 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct 08 09:58:13 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 08 09:58:13 compute-0 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 09:58:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:13.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:13.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:14 compute-0 groupadd[188758]: group added to /etc/group: name=clevis, GID=990
Oct 08 09:58:14 compute-0 groupadd[188758]: group added to /etc/gshadow: name=clevis
Oct 08 09:58:14 compute-0 groupadd[188758]: new group: name=clevis, GID=990
Oct 08 09:58:14 compute-0 useradd[188765]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 08 09:58:14 compute-0 usermod[188775]: add 'clevis' to group 'tss'
Oct 08 09:58:14 compute-0 usermod[188775]: add 'clevis' to shadow group 'tss'
Oct 08 09:58:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:14 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:58:14 compute-0 ceph-mon[73572]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 09:58:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:15 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095815 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:58:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:15 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:15.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:15] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:58:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:15] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:58:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:15.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:16 compute-0 polkitd[6524]: Reloading rules
Oct 08 09:58:16 compute-0 polkitd[6524]: Collecting garbage unconditionally...
Oct 08 09:58:16 compute-0 polkitd[6524]: Loading rules from directory /etc/polkit-1/rules.d
Oct 08 09:58:16 compute-0 polkitd[6524]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 08 09:58:16 compute-0 polkitd[6524]: Finished loading, compiling and executing 4 rules
Oct 08 09:58:16 compute-0 polkitd[6524]: Reloading rules
Oct 08 09:58:16 compute-0 polkitd[6524]: Collecting garbage unconditionally...
Oct 08 09:58:16 compute-0 polkitd[6524]: Loading rules from directory /etc/polkit-1/rules.d
Oct 08 09:58:16 compute-0 polkitd[6524]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 08 09:58:16 compute-0 polkitd[6524]: Finished loading, compiling and executing 4 rules
Oct 08 09:58:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:16 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:17.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:17 compute-0 ceph-mon[73572]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:17 compute-0 groupadd[188965]: group added to /etc/group: name=ceph, GID=167
Oct 08 09:58:17 compute-0 groupadd[188965]: group added to /etc/gshadow: name=ceph
Oct 08 09:58:17 compute-0 groupadd[188965]: new group: name=ceph, GID=167
Oct 08 09:58:17 compute-0 useradd[188971]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 08 09:58:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:17 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:17 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:17.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:58:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:58:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:58:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:58:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:17.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:58:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:58:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:58:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:58:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:58:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:18 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:18.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:19 compute-0 ceph-mon[73572]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:19 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:19 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:19.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:20 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:20 compute-0 ceph-mon[73572]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:20 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 08 09:58:20 compute-0 sshd[1006]: Received signal 15; terminating.
Oct 08 09:58:20 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 08 09:58:20 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 08 09:58:20 compute-0 systemd[1]: sshd.service: Consumed 2.334s CPU time, no IO.
Oct 08 09:58:20 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 08 09:58:20 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 08 09:58:20 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 09:58:20 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 09:58:20 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 09:58:20 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 08 09:58:20 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 08 09:58:20 compute-0 sshd[189680]: Server listening on 0.0.0.0 port 22.
Oct 08 09:58:20 compute-0 sshd[189680]: Server listening on :: port 22.
Oct 08 09:58:20 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 08 09:58:20 compute-0 podman[189667]: 2025-10-08 09:58:20.783290798 +0000 UTC m=+0.090395316 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Oct 08 09:58:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:21 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:21 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:21.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:21.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:22 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:22 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 09:58:22 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 09:58:22 compute-0 systemd[1]: Reloading.
Oct 08 09:58:22 compute-0 systemd-rc-local-generator[189955]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:22 compute-0 systemd-sysv-generator[189958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:23 compute-0 sudo[189976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:58:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 09:58:23 compute-0 sudo[189976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:23 compute-0 sudo[189976]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:23 compute-0 sudo[190668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:58:23 compute-0 sudo[190668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:23 compute-0 sudo[190668]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:23 compute-0 sudo[190753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:58:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:23 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:23 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:23 compute-0 sudo[190753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:23.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:23.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:24 compute-0 ceph-mon[73572]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:24 compute-0 sudo[190753]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:24 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 08 09:58:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:58:25 compute-0 ceph-mon[73572]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 09:58:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:25 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:25 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:25.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:25] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 09:58:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:25] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 09:58:25 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 08 09:58:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 09:58:25 compute-0 PackageKit[193649]: daemon start
Oct 08 09:58:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:25.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 09:58:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:26 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:58:26 compute-0 sudo[170381]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 289 B/s rd, 0 op/s
Oct 08 09:58:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:58:26 compute-0 sudo[194586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:58:26 compute-0 sudo[194586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:26 compute-0 sudo[194586]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:26 compute-0 sudo[194666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:58:26 compute-0 sudo[194666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-mon[73572]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:58:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:58:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:27.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:58:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:27.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:58:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:27.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:58:27 compute-0 podman[195091]: 2025-10-08 09:58:27.393717146 +0000 UTC m=+0.046271199 container create 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:58:27 compute-0 systemd[1]: Started libpod-conmon-3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5.scope.
Oct 08 09:58:27 compute-0 podman[195091]: 2025-10-08 09:58:27.36783566 +0000 UTC m=+0.020389733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:58:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:58:27 compute-0 podman[195091]: 2025-10-08 09:58:27.499162357 +0000 UTC m=+0.151716440 container init 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:58:27 compute-0 podman[195091]: 2025-10-08 09:58:27.51062964 +0000 UTC m=+0.163183693 container start 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 09:58:27 compute-0 podman[195091]: 2025-10-08 09:58:27.514833191 +0000 UTC m=+0.167387244 container attach 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 09:58:27 compute-0 naughty_cerf[195214]: 167 167
Oct 08 09:58:27 compute-0 systemd[1]: libpod-3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5.scope: Deactivated successfully.
Oct 08 09:58:27 compute-0 podman[195091]: 2025-10-08 09:58:27.518233495 +0000 UTC m=+0.170787548 container died 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 08 09:58:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-945e9ec094fb81f97c58ae3dcf2bc92dce55e919f9162e694cccd46e9a78c4ea-merged.mount: Deactivated successfully.
Oct 08 09:58:27 compute-0 podman[195091]: 2025-10-08 09:58:27.558881796 +0000 UTC m=+0.211435849 container remove 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 09:58:27 compute-0 systemd[1]: libpod-conmon-3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5.scope: Deactivated successfully.
Oct 08 09:58:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:27 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:27 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:27.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:27 compute-0 podman[195455]: 2025-10-08 09:58:27.713241473 +0000 UTC m=+0.041940605 container create 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:58:27 compute-0 systemd[1]: Started libpod-conmon-7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d.scope.
Oct 08 09:58:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:58:27 compute-0 podman[195455]: 2025-10-08 09:58:27.694961132 +0000 UTC m=+0.023660284 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:27 compute-0 podman[195455]: 2025-10-08 09:58:27.807740567 +0000 UTC m=+0.136439729 container init 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 08 09:58:27 compute-0 podman[195455]: 2025-10-08 09:58:27.818359483 +0000 UTC m=+0.147058615 container start 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 09:58:27 compute-0 podman[195455]: 2025-10-08 09:58:27.821829418 +0000 UTC m=+0.150528550 container attach 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:58:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:27.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:28 compute-0 ceph-mon[73572]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 289 B/s rd, 0 op/s
Oct 08 09:58:28 compute-0 ceph-mon[73572]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct 08 09:58:28 compute-0 thirsty_kalam[195582]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:58:28 compute-0 thirsty_kalam[195582]: --> All data devices are unavailable
Oct 08 09:58:28 compute-0 systemd[1]: libpod-7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d.scope: Deactivated successfully.
Oct 08 09:58:28 compute-0 podman[195455]: 2025-10-08 09:58:28.16777712 +0000 UTC m=+0.496476252 container died 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 09:58:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de-merged.mount: Deactivated successfully.
Oct 08 09:58:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:28 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:28 compute-0 podman[195455]: 2025-10-08 09:58:28.213543702 +0000 UTC m=+0.542242834 container remove 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 09:58:28 compute-0 systemd[1]: libpod-conmon-7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d.scope: Deactivated successfully.
Oct 08 09:58:28 compute-0 sudo[194666]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:28 compute-0 podman[196034]: 2025-10-08 09:58:28.286854456 +0000 UTC m=+0.084470089 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 08 09:58:28 compute-0 sudo[196153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:58:28 compute-0 sudo[196153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:28 compute-0 sudo[196153]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:28 compute-0 sudo[196229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:58:28 compute-0 sudo[196229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:28 compute-0 podman[196715]: 2025-10-08 09:58:28.762881672 +0000 UTC m=+0.039623938 container create 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 09:58:28 compute-0 systemd[1]: Started libpod-conmon-3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375.scope.
Oct 08 09:58:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:58:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct 08 09:58:28 compute-0 podman[196715]: 2025-10-08 09:58:28.747119005 +0000 UTC m=+0.023861301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:58:28 compute-0 podman[196715]: 2025-10-08 09:58:28.84316877 +0000 UTC m=+0.119911086 container init 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:58:28 compute-0 podman[196715]: 2025-10-08 09:58:28.852477801 +0000 UTC m=+0.129220077 container start 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:58:28 compute-0 podman[196715]: 2025-10-08 09:58:28.85631807 +0000 UTC m=+0.133060346 container attach 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 09:58:28 compute-0 sleepy_mendel[196837]: 167 167
Oct 08 09:58:28 compute-0 systemd[1]: libpod-3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375.scope: Deactivated successfully.
Oct 08 09:58:28 compute-0 podman[196715]: 2025-10-08 09:58:28.860527031 +0000 UTC m=+0.137269307 container died 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:58:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c797247a03231c0e657b2892cb8213a83b9fdf90626c3e35341724dba78c53a7-merged.mount: Deactivated successfully.
Oct 08 09:58:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:28.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:28 compute-0 podman[196715]: 2025-10-08 09:58:28.903005483 +0000 UTC m=+0.179747759 container remove 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 09:58:28 compute-0 systemd[1]: libpod-conmon-3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375.scope: Deactivated successfully.
Oct 08 09:58:29 compute-0 podman[197056]: 2025-10-08 09:58:29.071303967 +0000 UTC m=+0.046923791 container create 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:58:29 compute-0 systemd[1]: Started libpod-conmon-7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96.scope.
Oct 08 09:58:29 compute-0 podman[197056]: 2025-10-08 09:58:29.052209648 +0000 UTC m=+0.027829502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:58:29 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:29 compute-0 podman[197056]: 2025-10-08 09:58:29.181541478 +0000 UTC m=+0.157161322 container init 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 08 09:58:29 compute-0 podman[197056]: 2025-10-08 09:58:29.189498954 +0000 UTC m=+0.165118778 container start 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:58:29 compute-0 podman[197056]: 2025-10-08 09:58:29.196096745 +0000 UTC m=+0.171716589 container attach 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:58:29 compute-0 relaxed_tu[197167]: {
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:     "1": [
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:         {
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "devices": [
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "/dev/loop3"
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             ],
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "lv_name": "ceph_lv0",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "lv_size": "21470642176",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "name": "ceph_lv0",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "tags": {
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.cluster_name": "ceph",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.crush_device_class": "",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.encrypted": "0",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.osd_id": "1",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.type": "block",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.vdo": "0",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:                 "ceph.with_tpm": "0"
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             },
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "type": "block",
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:             "vg_name": "ceph_vg0"
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:         }
Oct 08 09:58:29 compute-0 relaxed_tu[197167]:     ]
Oct 08 09:58:29 compute-0 relaxed_tu[197167]: }
Oct 08 09:58:29 compute-0 systemd[1]: libpod-7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96.scope: Deactivated successfully.
Oct 08 09:58:29 compute-0 podman[197056]: 2025-10-08 09:58:29.491164453 +0000 UTC m=+0.466784297 container died 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:58:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e-merged.mount: Deactivated successfully.
Oct 08 09:58:29 compute-0 podman[197056]: 2025-10-08 09:58:29.530935935 +0000 UTC m=+0.506555759 container remove 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 09:58:29 compute-0 systemd[1]: libpod-conmon-7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96.scope: Deactivated successfully.
Oct 08 09:58:29 compute-0 sudo[196229]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:29 compute-0 sudo[197680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:58:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:29 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:29 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e000a2b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:29 compute-0 sudo[197680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:29 compute-0 sudo[197680]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:29 compute-0 sudo[197760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:58:29 compute-0 sudo[197760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:29.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:29.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:30 compute-0 podman[198158]: 2025-10-08 09:58:30.120432809 +0000 UTC m=+0.086179466 container create 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 09:58:30 compute-0 podman[198158]: 2025-10-08 09:58:30.05684477 +0000 UTC m=+0.022591457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:58:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:30 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:30 compute-0 systemd[1]: Started libpod-conmon-961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe.scope.
Oct 08 09:58:30 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:58:30 compute-0 podman[198158]: 2025-10-08 09:58:30.328772714 +0000 UTC m=+0.294519401 container init 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:58:30 compute-0 podman[198158]: 2025-10-08 09:58:30.335878751 +0000 UTC m=+0.301625408 container start 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Oct 08 09:58:30 compute-0 friendly_yonath[198353]: 167 167
Oct 08 09:58:30 compute-0 systemd[1]: libpod-961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe.scope: Deactivated successfully.
Oct 08 09:58:30 compute-0 podman[198158]: 2025-10-08 09:58:30.414260065 +0000 UTC m=+0.380006712 container attach 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 09:58:30 compute-0 podman[198158]: 2025-10-08 09:58:30.414745542 +0000 UTC m=+0.380492199 container died 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 09:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9667a434889fdd8d7c98b4045100b943b329c2ab94bca1452346f6048d6b64f-merged.mount: Deactivated successfully.
Oct 08 09:58:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct 08 09:58:30 compute-0 podman[198158]: 2025-10-08 09:58:30.958115302 +0000 UTC m=+0.923861959 container remove 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:58:30 compute-0 ceph-mon[73572]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct 08 09:58:30 compute-0 systemd[1]: libpod-conmon-961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe.scope: Deactivated successfully.
Oct 08 09:58:31 compute-0 podman[198782]: 2025-10-08 09:58:31.146480088 +0000 UTC m=+0.069168057 container create 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 09:58:31 compute-0 podman[198782]: 2025-10-08 09:58:31.097742786 +0000 UTC m=+0.020430775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:58:31 compute-0 systemd[1]: Started libpod-conmon-476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f.scope.
Oct 08 09:58:31 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:58:31 compute-0 podman[198782]: 2025-10-08 09:58:31.311072958 +0000 UTC m=+0.233760957 container init 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:58:31 compute-0 podman[198782]: 2025-10-08 09:58:31.321097534 +0000 UTC m=+0.243785503 container start 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:58:31 compute-0 podman[198782]: 2025-10-08 09:58:31.362606784 +0000 UTC m=+0.285294803 container attach 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:58:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:31 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:31 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:31.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:31 compute-0 lvm[198921]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:58:31 compute-0 lvm[198921]: VG ceph_vg0 finished
Oct 08 09:58:32 compute-0 serene_cannon[198846]: {}
Oct 08 09:58:32 compute-0 systemd[1]: libpod-476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f.scope: Deactivated successfully.
Oct 08 09:58:32 compute-0 systemd[1]: libpod-476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f.scope: Consumed 1.237s CPU time.
Oct 08 09:58:32 compute-0 podman[198782]: 2025-10-08 09:58:32.088896878 +0000 UTC m=+1.011584857 container died 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Oct 08 09:58:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:32 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f-merged.mount: Deactivated successfully.
Oct 08 09:58:32 compute-0 podman[198782]: 2025-10-08 09:58:32.467243774 +0000 UTC m=+1.389931763 container remove 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:58:32 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 09:58:32 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 09:58:32 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.163s CPU time.
Oct 08 09:58:32 compute-0 systemd[1]: run-r7106b39303f8423ab15aadc2e42f0a32.service: Deactivated successfully.
Oct 08 09:58:32 compute-0 sudo[197760]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:58:32 compute-0 systemd[1]: libpod-conmon-476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f.scope: Deactivated successfully.
Oct 08 09:58:32 compute-0 ceph-mon[73572]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct 08 09:58:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:58:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:58:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:58:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:58:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:33 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:33 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:33 compute-0 sudo[198938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:58:33 compute-0 sudo[198938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:33 compute-0 sudo[198938]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:33.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:58:33 compute-0 ceph-mon[73572]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 08 09:58:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:58:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:34 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct 08 09:58:35 compute-0 sudo[199090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhzmklblovsrlwtmlptqbwqhmlkhkmvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917514.8315837-968-263609073155090/AnsiballZ_systemd.py'
Oct 08 09:58:35 compute-0 sudo[199090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:35 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:35 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000067s ======
Oct 08 09:58:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:35.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000067s
Oct 08 09:58:35 compute-0 python3.9[199092]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 09:58:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:35] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:58:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:35] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:58:35 compute-0 systemd[1]: Reloading.
Oct 08 09:58:35 compute-0 ceph-mon[73572]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct 08 09:58:35 compute-0 systemd-rc-local-generator[199122]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:35 compute-0 systemd-sysv-generator[199125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:36 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:36 compute-0 sudo[199090]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:36 compute-0 sudo[199281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgoeyivrfcmlovkdspvdqrtlevpyfejw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917516.377228-968-95237537990447/AnsiballZ_systemd.py'
Oct 08 09:58:36 compute-0 sudo[199281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Oct 08 09:58:36 compute-0 python3.9[199283]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 09:58:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:37.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:37 compute-0 systemd[1]: Reloading.
Oct 08 09:58:37 compute-0 systemd-rc-local-generator[199316]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:37 compute-0 systemd-sysv-generator[199319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:37 compute-0 sudo[199281]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:37 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:37 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:37.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:37 compute-0 sudo[199472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpbfwcaumoxozdtoocrqogtxisfzitez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917517.6024742-968-102464402653902/AnsiballZ_systemd.py'
Oct 08 09:58:37 compute-0 sudo[199472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:37 compute-0 ceph-mon[73572]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Oct 08 09:58:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:38 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:38 compute-0 python3.9[199474]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 09:58:38 compute-0 systemd[1]: Reloading.
Oct 08 09:58:38 compute-0 systemd-rc-local-generator[199504]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:38 compute-0 systemd-sysv-generator[199508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:38 compute-0 sudo[199472]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:58:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:38.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:58:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:38.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:58:39 compute-0 sudo[199663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dubxwjvdazqzmtuujxgqjaqgqlmodvbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917518.793778-968-162470395350270/AnsiballZ_systemd.py'
Oct 08 09:58:39 compute-0 sudo[199663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:39 compute-0 python3.9[199665]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 09:58:39 compute-0 systemd[1]: Reloading.
Oct 08 09:58:39 compute-0 systemd-rc-local-generator[199696]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:39 compute-0 systemd-sysv-generator[199700]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:39 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:39 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c40023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:39 compute-0 sudo[199663]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:39.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:39 compute-0 ceph-mon[73572]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:58:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:39.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:40 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:40 compute-0 sudo[199855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrljvcjiwfytzgchymlilwvpmbdouxax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917519.96652-1055-145406499570633/AnsiballZ_systemd.py'
Oct 08 09:58:40 compute-0 sudo[199855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:40 compute-0 python3.9[199857]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:40 compute-0 systemd[1]: Reloading.
Oct 08 09:58:40 compute-0 systemd-sysv-generator[199891]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:40 compute-0 systemd-rc-local-generator[199888]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:58:40 compute-0 sudo[199855]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:41 compute-0 sudo[200046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucklbzpvfilisziuopmnoemivwzvkgsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917521.0778506-1055-37201527282951/AnsiballZ_systemd.py'
Oct 08 09:58:41 compute-0 sudo[200046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:41 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:41 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:41 compute-0 python3.9[200048]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:41.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:41 compute-0 systemd[1]: Reloading.
Oct 08 09:58:41 compute-0 systemd-rc-local-generator[200077]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:41 compute-0 systemd-sysv-generator[200083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:41.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:42 compute-0 ceph-mon[73572]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:58:42 compute-0 sudo[200046]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:42 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c40023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:42 compute-0 sudo[200237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxxdvhdxmsyvyreupagdjnycrvzmlley ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917522.1727567-1055-121293523311468/AnsiballZ_systemd.py'
Oct 08 09:58:42 compute-0 sudo[200237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:42 compute-0 python3.9[200239]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:42 compute-0 systemd[1]: Reloading.
Oct 08 09:58:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:58:42 compute-0 systemd-sysv-generator[200268]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:42 compute-0 systemd-rc-local-generator[200265]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:43 compute-0 sudo[200279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:58:43 compute-0 sudo[200279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:58:43 compute-0 sudo[200279]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:43 compute-0 sudo[200237]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:43 compute-0 sudo[200453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkicokpbfoytvuboeacitzhhpzuepbzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917523.276722-1055-203023932890783/AnsiballZ_systemd.py'
Oct 08 09:58:43 compute-0 sudo[200453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:43 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:43 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:43.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:43 compute-0 python3.9[200455]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:43 compute-0 sudo[200453]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:43.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:44 compute-0 ceph-mon[73572]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:58:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:44 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:44 compute-0 sudo[200609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqgneqrpakosltzzmksozsttvgwgaaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917524.0813365-1055-16133355839637/AnsiballZ_systemd.py'
Oct 08 09:58:44 compute-0 sudo[200609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:44 compute-0 python3.9[200611]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095844 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:58:44 compute-0 systemd[1]: Reloading.
Oct 08 09:58:44 compute-0 systemd-rc-local-generator[200640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:44 compute-0 systemd-sysv-generator[200645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:58:45 compute-0 sudo[200609]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:45 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c40023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:45 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:58:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:45.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:58:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:45] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:58:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:45] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:58:45 compute-0 sudo[200801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amtxzyiuhqakoweqbtgnbllbxeglhesq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917525.5913088-1163-155344836800133/AnsiballZ_systemd.py'
Oct 08 09:58:45 compute-0 sudo[200801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:45.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:46 compute-0 ceph-mon[73572]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 09:58:46 compute-0 python3.9[200803]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 09:58:46 compute-0 systemd[1]: Reloading.
Oct 08 09:58:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:46 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:46 compute-0 systemd-sysv-generator[200841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:58:46 compute-0 systemd-rc-local-generator[200836]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:58:46 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 08 09:58:46 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 08 09:58:46 compute-0 sudo[200801]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:47.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:47 compute-0 sudo[200997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkkprveullylorniismfaknhgbnnpusc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917526.9264667-1187-131162434721044/AnsiballZ_systemd.py'
Oct 08 09:58:47 compute-0 sudo[200997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:47 compute-0 python3.9[200999]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:47 compute-0 sudo[200997]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:58:47
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'volumes', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'backups', '.nfs', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'vms', 'default.rgw.control']
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:58:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:47 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:47 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:58:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:47.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
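[editor's note] The pg_autoscaler lines above follow a simple rule: pg target = (pool's share of raw capacity) x bias x (cluster PG budget), then quantized to a power of two. Back-computing from the logged numbers gives a budget of 300 PGs, consistent with the default mon_target_pg_per_osd=100 on a 3-OSD cluster — an assumption inferred from the figures, not stated in the log:

    # Reproduce the '.mgr' and 'cephfs.cephfs.meta' pg targets from the log.
    PG_BUDGET = 100 * 3  # assumed: mon_target_pg_per_osd=100, 3 OSDs

    for pool, usage, bias in [
        (".mgr", 7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, usage * bias * PG_BUDGET)
    # .mgr               0.0021557249951162337  (quantized to 1 in the log)
    # cephfs.cephfs.meta 0.0006104707950771635  (quantized to 16 in the log)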
Oct 08 09:58:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:58:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:58:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
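[editor's note] The rbd_support block above is the mgr module reloading its mirror-snapshot schedules for each RBD pool (the empty start_after means a full reload; the TrashPurgeScheduleHandler below does the same for trash purging). The configured schedules, if any, are visible from the CLI; a sketch assuming admin credentials on the host:

    import subprocess

    # List mirror-snapshot schedules for the pools the module reloads above.
    for pool in ["vms", "volumes", "backups", "images"]:
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)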
Oct 08 09:58:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:48.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:48 compute-0 sudo[201153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynozwglrwlfkrinlwjxhpaysebhfropp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917527.7430964-1187-149885891966697/AnsiballZ_systemd.py'
Oct 08 09:58:48 compute-0 sudo[201153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:48 compute-0 ceph-mon[73572]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:58:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:48 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:48 compute-0 python3.9[201155]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:48 compute-0 sudo[201153]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:48 compute-0 auditd[703]: Audit daemon rotating log files
Oct 08 09:58:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:48 compute-0 sudo[201308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqrecvmqihsuhgzznyvgzwybwovolyav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917528.5868242-1187-149328662925271/AnsiballZ_systemd.py'
Oct 08 09:58:48 compute-0 sudo[201308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:48.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:49 compute-0 python3.9[201310]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:49 compute-0 sudo[201308]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:49 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:49 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:49.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:49 compute-0 sudo[201464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arjfpcczpvldzxwhpqwgxfkvkgrvqcmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917529.43143-1187-159945205038113/AnsiballZ_systemd.py'
Oct 08 09:58:49 compute-0 sudo[201464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:50.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:50 compute-0 python3.9[201466]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:50 compute-0 sudo[201464]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:50 compute-0 ceph-mon[73572]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:50 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:50 compute-0 sudo[201620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurogmgmojljspmfxvjqxtznpaqoadlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917530.2566888-1187-116927455818616/AnsiballZ_systemd.py'
Oct 08 09:58:50 compute-0 sudo[201620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:50 compute-0 python3.9[201622]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:50 compute-0 podman[201623]: 2025-10-08 09:58:50.973190426 +0000 UTC m=+0.121526391 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
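[editor's note] The podman line above is a health_status event for the ovn_controller container: the configured healthcheck (the /openstack/healthcheck script mounted into the container, per the embedded config_data) passed, with a failing streak of 0. The same check can be re-run on demand from the host; a small sketch:

    import subprocess

    # Re-run the container healthcheck the event above records;
    # exit code 0 means healthy.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else "unhealthy")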
Oct 08 09:58:51 compute-0 sudo[201620]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:51 compute-0 sudo[201802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfrwvvbzzrffmcqalbpsqpfzzibmbfmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917531.1339428-1187-16923000780851/AnsiballZ_systemd.py'
Oct 08 09:58:51 compute-0 sudo[201802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:51 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:51 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:51 compute-0 python3.9[201804]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:51.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:51 compute-0 sudo[201802]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:58:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:52.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:58:52 compute-0 ceph-mon[73572]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:52 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:52 compute-0 sudo[201958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czivpwpuwnvbwhkjjckoydswvrkwxeea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917531.9455042-1187-69261619886293/AnsiballZ_systemd.py'
Oct 08 09:58:52 compute-0 sudo[201958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:52 compute-0 python3.9[201960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:52 compute-0 sudo[201958]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:53 compute-0 sudo[202114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrhlencvveadrxfjtabnxxvccdovmwgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917532.8502548-1187-189256595668569/AnsiballZ_systemd.py'
Oct 08 09:58:53 compute-0 sudo[202114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:53 compute-0 python3.9[202116]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:53 compute-0 sudo[202114]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:53 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:53 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:53.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:53 compute-0 sudo[202270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbdrfmljssvdtnqcsmioxtmhntayqtor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917533.6741824-1187-275147491198720/AnsiballZ_systemd.py'
Oct 08 09:58:53 compute-0 sudo[202270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:54.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:54 compute-0 ceph-mon[73572]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:58:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:54 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:54 compute-0 python3.9[202272]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:54 compute-0 sudo[202270]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:54 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:58:54 compute-0 sudo[202425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iruudcdlhqsttlcpoattbdiipiumfrfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917534.4688938-1187-104614255768447/AnsiballZ_systemd.py'
Oct 08 09:58:54 compute-0 sudo[202425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:55 compute-0 python3.9[202427]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:55 compute-0 sudo[202425]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:55 compute-0 sudo[202581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgyfvqwnlsxcvcbnnjlnghtdrslyaosf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917535.26897-1187-169258116404619/AnsiballZ_systemd.py'
Oct 08 09:58:55 compute-0 sudo[202581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:55 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:55 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:58:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:55.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:58:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:55] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct 08 09:58:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:55] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
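[editor's note] The two lines above record the same event twice — once from the mgr container's stdout and once from the mgr's own cherrypy access log: Prometheus 2.51.0 scraped GET /metrics and received 48341 bytes. A sketch of scraping the same endpoint by hand; the URL is an assumption (the mgr prometheus module defaults to port 9283, which this log does not confirm):

    import urllib.request

    # Scrape the ceph-mgr prometheus exporter and count exposition samples.
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics") as resp:
        body = resp.read().decode()
    samples = [l for l in body.splitlines() if l and not l.startswith("#")]
    print(len(samples), "samples")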
Oct 08 09:58:55 compute-0 python3.9[202583]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:56 compute-0 sudo[202581]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:56.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:56 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:56 compute-0 ceph-mon[73572]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:56 compute-0 sudo[202737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjslvbispzzzvcomsjzqsmftqlvzihju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917536.1112833-1187-23707750581812/AnsiballZ_systemd.py'
Oct 08 09:58:56 compute-0 sudo[202737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:56 compute-0 python3.9[202739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:56 compute-0 sudo[202737]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:57.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:57 compute-0 sudo[202893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhpmnbythgutkemxszqadyamkvuldcef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917536.848739-1187-141001611468442/AnsiballZ_systemd.py'
Oct 08 09:58:57 compute-0 sudo[202893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:57 compute-0 ceph-mon[73572]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:58:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:58:57.397 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 09:58:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:58:57.397 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 09:58:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:58:57.397 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 09:58:57 compute-0 python3.9[202895]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:57 compute-0 sudo[202893]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:57 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:57 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:58:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:57 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:58:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:57 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:58:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:57.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:58:57 compute-0 sudo[203048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzzwagtymsbanzzdeykjhxrtozhantaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917537.6150007-1187-180209856973604/AnsiballZ_systemd.py'
Oct 08 09:58:57 compute-0 sudo[203048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:58:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:58.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:58:58 compute-0 python3.9[203050]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 09:58:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:58 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:58 compute-0 sudo[203048]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:58:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:58:58 compute-0 podman[203079]: 2025-10-08 09:58:58.898824626 +0000 UTC m=+0.055187271 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 08 09:58:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:58.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:58:59 compute-0 sudo[203225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzfexjbcxpdfqibdwymivqlvofioglxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917539.001015-1493-32646009663205/AnsiballZ_file.py'
Oct 08 09:58:59 compute-0 sudo[203225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:59 compute-0 python3.9[203227]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:58:59 compute-0 sudo[203225]: pam_unix(sudo:session): session closed for user root
Oct 08 09:58:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:59 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:59 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:58:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:58:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:58:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:59.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:58:59 compute-0 sudo[203377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kouzxfavxmgukrjhakntqhmmhllhkxak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917539.6136208-1493-193510631742155/AnsiballZ_file.py'
Oct 08 09:58:59 compute-0 sudo[203377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:58:59 compute-0 ceph-mon[73572]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:59:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:00.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:00 compute-0 python3.9[203379]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:59:00 compute-0 sudo[203377]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:00 compute-0 sudo[203530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqfxfaltpzgvvfhtbdtixnhmculbaxtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917540.2753308-1493-92983054729226/AnsiballZ_file.py'
Oct 08 09:59:00 compute-0 sudo[203530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
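[editor's note] The ganesha reaper events in this section bracket an NFS grace period: grace began at 09:58:54 with a 90 s duration, client reclaim info was reloaded from the backend at 09:58:57, and because no clients held reclaimable state (clid count 0) grace was lifted early at 09:59:00 instead of running the full 90 s. A quick check of that window from the timestamps (ganesha logs day/month/year):

    from datetime import datetime

    FMT = "%d/%m/%Y %H:%M:%S"
    start = datetime.strptime("08/10/2025 09:58:54", FMT)
    lifted = datetime.strptime("08/10/2025 09:59:00", FMT)
    print((lifted - start).total_seconds())  # 6.0 s of a possible 90 s grace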
Oct 08 09:59:00 compute-0 python3.9[203532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:59:00 compute-0 sudo[203530]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:59:01 compute-0 sudo[203683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlibsvwwkntvtuudxpdzsjerfjrrrbbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917540.9247644-1493-263612182442690/AnsiballZ_file.py'
Oct 08 09:59:01 compute-0 sudo[203683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:01 compute-0 python3.9[203685]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:59:01 compute-0 sudo[203683]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:01 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:01 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:01.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:01 compute-0 sudo[203837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvrohdnwxwdqslwxwkxdplltqsdrqfoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917541.5349307-1493-153339752529583/AnsiballZ_file.py'
Oct 08 09:59:01 compute-0 sudo[203837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:01 compute-0 ceph-mon[73572]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 09:59:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:59:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:02.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:59:02 compute-0 python3.9[203839]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:59:02 compute-0 sudo[203837]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:02 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:02 compute-0 sudo[203990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqvipnxcbiuftvsrzcdogglwiohpgbuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917542.2841432-1493-88834167431796/AnsiballZ_file.py'
Oct 08 09:59:02 compute-0 sudo[203990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:02 compute-0 python3.9[203992]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 09:59:02 compute-0 sudo[203990]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:59:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:59:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 937 B/s wr, 3 op/s
Oct 08 09:59:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:59:03 compute-0 sudo[204093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:59:03 compute-0 sudo[204093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:03 compute-0 sudo[204093]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:03 compute-0 sudo[204168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwbnqajezuvvzhnggbwwckaoxbscacgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917543.0660093-1622-220423562716082/AnsiballZ_stat.py'
Oct 08 09:59:03 compute-0 sudo[204168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:03 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:03 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:03 compute-0 python3.9[204170]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:03.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:03 compute-0 sudo[204168]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:03 compute-0 ceph-mon[73572]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 937 B/s wr, 3 op/s
Oct 08 09:59:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:04.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:04 compute-0 sudo[204294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeooedizhljtyfvsqogehwqubpitwylb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917543.0660093-1622-220423562716082/AnsiballZ_copy.py'
Oct 08 09:59:04 compute-0 sudo[204294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:04 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:04 compute-0 python3.9[204296]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917543.0660093-1622-220423562716082/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:04 compute-0 sudo[204294]: pam_unix(sudo:session): session closed for user root
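[editor's note] The stat-then-copy pairs here (virtlogd.conf above, then virtnodedevd.conf, virtproxyd.conf, virtqemud.conf below) are Ansible's usual file deployment flow: stat the destination with a sha1 checksum, compare against the staged source, and copy only on mismatch. A sketch of that comparison, using the same sha1 digest the checksum= values in these lines carry:

    import hashlib
    from pathlib import Path

    def sha1sum(path: str) -> str:
        # Same digest the ansible.legacy.stat/copy pair compares,
        # e.g. d7a72ae9... for the deployed virtlogd.conf above.
        return hashlib.sha1(Path(path).read_bytes()).hexdigest()

    def needs_copy(src: str, dest: str) -> bool:
        return not Path(dest).exists() or sha1sum(src) != sha1sum(dest)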
Oct 08 09:59:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1022 B/s wr, 3 op/s
Oct 08 09:59:04 compute-0 sudo[204446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvckraspjhfyrcngcmluzjjimedrsaqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917544.5575209-1622-71640988957292/AnsiballZ_stat.py'
Oct 08 09:59:04 compute-0 sudo[204446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:05 compute-0 python3.9[204448]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:05 compute-0 sudo[204446]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:05 compute-0 sudo[204572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfqzjpebetvyrlzmizsufjufqostxrew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917544.5575209-1622-71640988957292/AnsiballZ_copy.py'
Oct 08 09:59:05 compute-0 sudo[204572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:05 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:05 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:05 compute-0 python3.9[204574]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917544.5575209-1622-71640988957292/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:05 compute-0 sudo[204572]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:05] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:59:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:05] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:59:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:05.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:05 compute-0 ceph-mon[73572]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1022 B/s wr, 3 op/s
Oct 08 09:59:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:06.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:06 compute-0 sudo[204725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghkpdpdvcwyhzmvbcftcmpgrudppfmua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917545.8827927-1622-81557711266192/AnsiballZ_stat.py'
Oct 08 09:59:06 compute-0 sudo[204725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:06 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:06 compute-0 python3.9[204727]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:06 compute-0 sudo[204725]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:06 compute-0 sudo[204850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmiezsutreaargwufszrtjcdupihxddl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917545.8827927-1622-81557711266192/AnsiballZ_copy.py'
Oct 08 09:59:06 compute-0 sudo[204850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095906 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 09:59:06 compute-0 python3.9[204852]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917545.8827927-1622-81557711266192/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:06 compute-0 sudo[204850]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 937 B/s wr, 2 op/s
Oct 08 09:59:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:07.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:59:07 compute-0 sudo[205003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukylvwqyczdhmpphttkjakfyvfhushcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917547.0086787-1622-124114604733317/AnsiballZ_stat.py'
Oct 08 09:59:07 compute-0 sudo[205003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:07 compute-0 python3.9[205005]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:07 compute-0 sudo[205003]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:07 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:07 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:07.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:07 compute-0 sudo[205128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbcvrricgqsfewxshepljcgzfcvfbypp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917547.0086787-1622-124114604733317/AnsiballZ_copy.py'
Oct 08 09:59:07 compute-0 sudo[205128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:07 compute-0 ceph-mon[73572]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 937 B/s wr, 2 op/s
Oct 08 09:59:08 compute-0 python3.9[205130]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917547.0086787-1622-124114604733317/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:08.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:08 compute-0 sudo[205128]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:08 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:08 compute-0 sudo[205281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgwgdsoiirihyolfrtmtkbumakcjqvpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917548.1934218-1622-238163493429264/AnsiballZ_stat.py'
Oct 08 09:59:08 compute-0 sudo[205281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:08 compute-0 python3.9[205283]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:08 compute-0 sudo[205281]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 937 B/s wr, 2 op/s
Oct 08 09:59:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:08.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:59:09 compute-0 sudo[205407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjaxjahzsojbiirvoimjenfumbedkkvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917548.1934218-1622-238163493429264/AnsiballZ_copy.py'
Oct 08 09:59:09 compute-0 sudo[205407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:09 compute-0 python3.9[205409]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917548.1934218-1622-238163493429264/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:09 compute-0 sudo[205407]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:09 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:09 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:09 compute-0 sudo[205559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryabvfyqxkmydgiiysuahnaecbpnyszd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917549.479857-1622-261542011601556/AnsiballZ_stat.py'
Oct 08 09:59:09 compute-0 sudo[205559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:09.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:09 compute-0 python3.9[205561]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:09 compute-0 sudo[205559]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:10 compute-0 ceph-mon[73572]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 937 B/s wr, 2 op/s
Oct 08 09:59:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:10.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:10 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:10 compute-0 sudo[205685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jafyxkxogntxvcfezbzsfxdmzcirzlyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917549.479857-1622-261542011601556/AnsiballZ_copy.py'
Oct 08 09:59:10 compute-0 sudo[205685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:10 compute-0 python3.9[205687]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917549.479857-1622-261542011601556/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:10 compute-0 sudo[205685]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:11 compute-0 sudo[205837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfnyemmvutzlvnvgwtkkgmrcxaydeips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917550.6971312-1622-205710504951775/AnsiballZ_stat.py'
Oct 08 09:59:11 compute-0 sudo[205837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:11 compute-0 python3.9[205839]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:11 compute-0 sudo[205837]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:11 compute-0 sudo[205961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbgseutemrymidflcntveppfcuzmalqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917550.6971312-1622-205710504951775/AnsiballZ_copy.py'
Oct 08 09:59:11 compute-0 sudo[205961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:11 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:11 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:59:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:11.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:59:11 compute-0 python3.9[205963]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917550.6971312-1622-205710504951775/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:11 compute-0 sudo[205961]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:12 compute-0 ceph-mon[73572]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:12.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:12 compute-0 sudo[206114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkcdquwurhqedamlohwpkmwusstetkfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917551.9534554-1622-124437164045017/AnsiballZ_stat.py'
Oct 08 09:59:12 compute-0 sudo[206114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:12 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:12 compute-0 python3.9[206116]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:12 compute-0 sudo[206114]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:12 compute-0 sudo[206239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfonedfrjabuiknjdxmxoqtjhrdeacnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917551.9534554-1622-124437164045017/AnsiballZ_copy.py'
Oct 08 09:59:12 compute-0 sudo[206239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:12 compute-0 python3.9[206241]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917551.9534554-1622-124437164045017/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:12 compute-0 sudo[206239]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:13.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:13 compute-0 sudo[206392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaokhgpmmbllgskonhpellszrsyaghqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917553.4650486-1961-278592662548436/AnsiballZ_command.py'
Oct 08 09:59:13 compute-0 sudo[206392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:14 compute-0 python3.9[206394]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 08 09:59:14 compute-0 ceph-mon[73572]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:14.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:14 compute-0 sudo[206392]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:14 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:14 compute-0 sudo[206546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haryqihrmqzaptckenabrwcglnskakwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917554.2945971-1988-73316120426270/AnsiballZ_file.py'
Oct 08 09:59:14 compute-0 sudo[206546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:14 compute-0 python3.9[206548]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:14 compute-0 sudo[206546]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:15 compute-0 sudo[206699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxwamsmkvnkvunpqksaehrmmjdjwaipx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917554.9664445-1988-265111543224922/AnsiballZ_file.py'
Oct 08 09:59:15 compute-0 sudo[206699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:15 compute-0 python3.9[206701]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:15 compute-0 sudo[206699]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:15 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:15 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:15] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:59:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:15] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 09:59:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:15.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:15 compute-0 sudo[206851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvqkfixuxjngivkbsglubyytafbdcnyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917555.5579436-1988-232465518003058/AnsiballZ_file.py'
Oct 08 09:59:15 compute-0 sudo[206851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:16.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:16 compute-0 ceph-mon[73572]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:16 compute-0 python3.9[206854]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:16 compute-0 sudo[206851]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:16 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:16 compute-0 sudo[207004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjbjxxzfuibrpbhqercgsyuvlupgkrve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917556.244978-1988-169727827724339/AnsiballZ_file.py'
Oct 08 09:59:16 compute-0 sudo[207004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:16 compute-0 python3.9[207006]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:16 compute-0 sudo[207004]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:17.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:59:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:17.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:59:17 compute-0 sudo[207157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkkverwiekgvrtbndzckbsbqrcgvlnny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917556.8774478-1988-89174775148160/AnsiballZ_file.py'
Oct 08 09:59:17 compute-0 sudo[207157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:17 compute-0 python3.9[207159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:17 compute-0 sudo[207157]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:17 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:17 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:17.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:59:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:59:17 compute-0 sudo[207309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpwslmmuyqwuldrtrhmuhpmdhnchmsnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917557.5705602-1988-165704766524081/AnsiballZ_file.py'
Oct 08 09:59:17 compute-0 sudo[207309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:59:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:59:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 09:59:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:18.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 09:59:18 compute-0 python3.9[207311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:18 compute-0 ceph-mon[73572]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:59:18 compute-0 sudo[207309]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:59:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:59:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:59:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:59:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:18 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:18 compute-0 sudo[207462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glsxnwzkakvbugblkbdyehzmszwvtbjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917558.2407186-1988-55094433956171/AnsiballZ_file.py'
Oct 08 09:59:18 compute-0 sudo[207462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:18 compute-0 python3.9[207464]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:18 compute-0 sudo[207462]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:18.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:59:19 compute-0 sudo[207615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktqqurjdgpepjyaqvtlxtogctotfaspg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917558.84671-1988-9897576359247/AnsiballZ_file.py'
Oct 08 09:59:19 compute-0 sudo[207615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:19 compute-0 python3.9[207617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:19 compute-0 sudo[207615]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:19 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:19 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0039b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:19.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:20.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:20 compute-0 sudo[207768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaakcurzotidivuctnnatlntjfaqpieu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917559.5287967-1988-163585269915966/AnsiballZ_file.py'
Oct 08 09:59:20 compute-0 sudo[207768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:20 compute-0 ceph-mon[73572]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:20 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:20 compute-0 python3.9[207770]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:20 compute-0 sudo[207768]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:20 compute-0 sudo[207920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-youcbiqtjhdbyykabpuelasoqbsgybja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917560.4639962-1988-231724522889523/AnsiballZ_file.py'
Oct 08 09:59:20 compute-0 sudo[207920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:20 compute-0 python3.9[207922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:20 compute-0 sudo[207920]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:21 compute-0 sudo[208092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxxcjtayozwddtdahkkzreykboewkuga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917561.091257-1988-116656386663136/AnsiballZ_file.py'
Oct 08 09:59:21 compute-0 sudo[208092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:21 compute-0 podman[208047]: 2025-10-08 09:59:21.474716554 +0000 UTC m=+0.165396801 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 09:59:21 compute-0 python3.9[208096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:21 compute-0 sudo[208092]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:21 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:21 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:21.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:21 compute-0 sudo[208254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogseixyjpqctieysgedzrefzfanzunhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917561.6897986-1988-59294442058592/AnsiballZ_file.py'
Oct 08 09:59:21 compute-0 sudo[208254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:22.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:22 compute-0 ceph-mon[73572]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:22 compute-0 python3.9[208256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:22 compute-0 sudo[208254]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:22 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0039b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:22 compute-0 sudo[208406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ompysihwnyovryvxdvssegbliqogujuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917562.3407927-1988-60144582532000/AnsiballZ_file.py'
Oct 08 09:59:22 compute-0 sudo[208406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:22 compute-0 python3.9[208408]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:22 compute-0 sudo[208406]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:23 compute-0 sudo[208559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxkczxwcknbyksxeajogbcsszpxmnguc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917562.9684858-1988-232284109117150/AnsiballZ_file.py'
Oct 08 09:59:23 compute-0 sudo[208559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:23 compute-0 python3.9[208561]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:23 compute-0 sudo[208559]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:23 compute-0 sudo[208562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:59:23 compute-0 sudo[208562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:23 compute-0 sudo[208562]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:23 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:23 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:59:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:23.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:59:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:24.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:24 compute-0 ceph-mon[73572]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:24 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:24 compute-0 sudo[208737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uioxrlqlgzlyizxxcplzexmeecutwave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917564.4075418-2285-225649093465777/AnsiballZ_stat.py'
Oct 08 09:59:24 compute-0 sudo[208737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:24 compute-0 python3.9[208739]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 09:59:24 compute-0 sudo[208737]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:25 compute-0 sudo[208861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xywokovgrisudlcrcccvtcqvysuzhwrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917564.4075418-2285-225649093465777/AnsiballZ_copy.py'
Oct 08 09:59:25 compute-0 sudo[208861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:25 compute-0 python3.9[208863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917564.4075418-2285-225649093465777/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:25 compute-0 sudo[208861]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:25 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:25 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:59:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:59:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:25.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:25 compute-0 sudo[209013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txlgblcpizykhamqlabzpzptzeporskc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917565.5929344-2285-156920356207424/AnsiballZ_stat.py'
Oct 08 09:59:25 compute-0 sudo[209013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:26 compute-0 python3.9[209015]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:26 compute-0 sudo[209013]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:26.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:26 compute-0 ceph-mon[73572]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 09:59:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:26 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:26 compute-0 sudo[209137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhkpvgtjogchpttuzjkjkfpdswtizbpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917565.5929344-2285-156920356207424/AnsiballZ_copy.py'
Oct 08 09:59:26 compute-0 sudo[209137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:26 compute-0 python3.9[209139]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917565.5929344-2285-156920356207424/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:26 compute-0 sudo[209137]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:26 compute-0 sudo[209289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgkvstiooyclzrwlselrijablipsuzdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917566.744582-2285-59249924568419/AnsiballZ_stat.py'
Oct 08 09:59:27 compute-0 sudo[209289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:27.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:59:27 compute-0 python3.9[209291]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:27 compute-0 sudo[209289]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:27 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:27 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:27.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:27 compute-0 sudo[209413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnmxqysqtnqbsmfzlxzksfudslxgckaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917566.744582-2285-59249924568419/AnsiballZ_copy.py'
Oct 08 09:59:27 compute-0 sudo[209413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:27 compute-0 python3.9[209415]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917566.744582-2285-59249924568419/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:28 compute-0 sudo[209413]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:28.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:28 compute-0 ceph-mon[73572]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:28 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:28 compute-0 sudo[209566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtsnsaultpqahzcnfgulvnyijzsmcckj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917568.1722934-2285-261852975168970/AnsiballZ_stat.py'
Oct 08 09:59:28 compute-0 sudo[209566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:28 compute-0 python3.9[209568]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:28 compute-0 sudo[209566]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:28.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:59:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:28.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:59:29 compute-0 sudo[209704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiupjiuhexuomludanwvfciftouggkox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917568.1722934-2285-261852975168970/AnsiballZ_copy.py'
Oct 08 09:59:29 compute-0 sudo[209704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:29 compute-0 podman[209663]: 2025-10-08 09:59:29.104875211 +0000 UTC m=+0.065119330 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 09:59:29 compute-0 python3.9[209709]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917568.1722934-2285-261852975168970/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:29 compute-0 sudo[209704]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:29 compute-0 ceph-mon[73572]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:29 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:29 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:29.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:29 compute-0 sudo[209859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxbeofotnkpuwakkwgcvqwemrwsxvhsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917569.5138454-2285-250338820494429/AnsiballZ_stat.py'
Oct 08 09:59:29 compute-0 sudo[209859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:29 compute-0 python3.9[209861]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:30 compute-0 sudo[209859]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:30 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:30 compute-0 sudo[209983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysjfjgkqhpstpytsoebtpmzfevhmzjve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917569.5138454-2285-250338820494429/AnsiballZ_copy.py'
Oct 08 09:59:30 compute-0 sudo[209983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:30 compute-0 python3.9[209985]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917569.5138454-2285-250338820494429/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:30 compute-0 sudo[209983]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:30 compute-0 sudo[210135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svfunftxqkjcuimlgzqbtkulwgalhgzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917570.6729443-2285-278591622654645/AnsiballZ_stat.py'
Oct 08 09:59:30 compute-0 sudo[210135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:31 compute-0 python3.9[210137]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:31 compute-0 sudo[210135]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:31 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:31 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:31 compute-0 sudo[210259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liznwbuwjybjfokqaajojfeufbtonoya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917570.6729443-2285-278591622654645/AnsiballZ_copy.py'
Oct 08 09:59:31 compute-0 sudo[210259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:59:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:31.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:59:31 compute-0 python3.9[210261]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917570.6729443-2285-278591622654645/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:31 compute-0 ceph-mon[73572]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:31 compute-0 sudo[210259]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:32 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 09:59:32 compute-0 sudo[210412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plcfwwkcmmptfzgybwcbrgeabfvkvmim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917572.085847-2285-221965350089793/AnsiballZ_stat.py'
Oct 08 09:59:32 compute-0 sudo[210412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:32 compute-0 python3.9[210414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:32 compute-0 sudo[210412]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:59:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:59:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:32 compute-0 sudo[210535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjqhszlyosrtttcaxolfygjwztczghjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917572.085847-2285-221965350089793/AnsiballZ_copy.py'
Oct 08 09:59:32 compute-0 sudo[210535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:59:33 compute-0 python3.9[210537]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917572.085847-2285-221965350089793/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:33 compute-0 sudo[210535]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:33 compute-0 sudo[210688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glfzudrbfocdavsuzbrhtpvlkqcvpgep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917573.3078132-2285-74431260311386/AnsiballZ_stat.py'
Oct 08 09:59:33 compute-0 sudo[210688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:33 compute-0 kernel: ganesha.nfsd[188752]: segfault at 50 ip 00007ff68c83632e sp 00007ff640ff8210 error 4 in libntirpc.so.5.8[7ff68c81b000+2c000] likely on CPU 5 (core 0, socket 5)
Oct 08 09:59:33 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 09:59:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:33 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy ignored for local
Oct 08 09:59:33 compute-0 systemd[1]: Started Process Core Dump (PID 210691/UID 0).
Oct 08 09:59:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:33.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:33 compute-0 python3.9[210690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:33 compute-0 sudo[210688]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:34 compute-0 sudo[210741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:59:34 compute-0 sudo[210741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:34 compute-0 sudo[210741]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:34.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:34 compute-0 sudo[210789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 09:59:34 compute-0 sudo[210789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:34 compute-0 ceph-mon[73572]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 09:59:34 compute-0 sudo[210864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voduholmcvckujqrsctuovbqzpngwfmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917573.3078132-2285-74431260311386/AnsiballZ_copy.py'
Oct 08 09:59:34 compute-0 sudo[210864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:34 compute-0 python3.9[210866]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917573.3078132-2285-74431260311386/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:34 compute-0 sudo[210864]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:34 compute-0 sudo[210789]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:59:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:59:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 09:59:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:59:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 519 B/s rd, 0 op/s
Oct 08 09:59:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 09:59:34 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:59:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 09:59:34 compute-0 systemd-coredump[210692]: Process 188623 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007ff68c83632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 09:59:34 compute-0 sudo[211047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxoymyxceawlcsrcnceoaurjcsfzfckw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917574.5159693-2285-247550885788487/AnsiballZ_stat.py'
Oct 08 09:59:34 compute-0 sudo[211047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:34 compute-0 systemd[1]: systemd-coredump@5-210691-0.service: Deactivated successfully.
Oct 08 09:59:34 compute-0 systemd[1]: systemd-coredump@5-210691-0.service: Consumed 1.116s CPU time.
Oct 08 09:59:34 compute-0 podman[211054]: 2025-10-08 09:59:34.947707761 +0000 UTC m=+0.024526560 container died c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:59:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29-merged.mount: Deactivated successfully.
Oct 08 09:59:35 compute-0 podman[211054]: 2025-10-08 09:59:35.009754485 +0000 UTC m=+0.086573274 container remove c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 09:59:35 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 09:59:35 compute-0 python3.9[211050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:35 compute-0 sudo[211047]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:35 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 09:59:35 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.494s CPU time.
Oct 08 09:59:35 compute-0 sudo[211218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glgtnitnldydhthidqulvtixwfyeojzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917574.5159693-2285-247550885788487/AnsiballZ_copy.py'
Oct 08 09:59:35 compute-0 sudo[211218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:35 compute-0 python3.9[211220]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917574.5159693-2285-247550885788487/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:59:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 09:59:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:59:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 09:59:35 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:59:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 09:59:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:59:35 compute-0 sudo[211218]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:35 compute-0 sudo[211221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:59:35 compute-0 sudo[211221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:35 compute-0 sudo[211221]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:35 compute-0 sudo[211246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 09:59:35 compute-0 sudo[211246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:35 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:59:35 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 09:59:35 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:59:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:35] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 09:59:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:35] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 09:59:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:59:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:35.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:59:36 compute-0 podman[211360]: 2025-10-08 09:59:36.030923992 +0000 UTC m=+0.039120280 container create 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 09:59:36 compute-0 systemd[1]: Started libpod-conmon-3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645.scope.
Oct 08 09:59:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:36 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:59:36 compute-0 podman[211360]: 2025-10-08 09:59:36.109411388 +0000 UTC m=+0.117607696 container init 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 09:59:36 compute-0 podman[211360]: 2025-10-08 09:59:36.015018388 +0000 UTC m=+0.023214696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:59:36 compute-0 podman[211360]: 2025-10-08 09:59:36.117457993 +0000 UTC m=+0.125654291 container start 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:59:36 compute-0 podman[211360]: 2025-10-08 09:59:36.121110739 +0000 UTC m=+0.129307037 container attach 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:59:36 compute-0 eager_kowalevski[211411]: 167 167
Oct 08 09:59:36 compute-0 systemd[1]: libpod-3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645.scope: Deactivated successfully.
Oct 08 09:59:36 compute-0 podman[211360]: 2025-10-08 09:59:36.122630231 +0000 UTC m=+0.130826599 container died 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 08 09:59:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-df96504db490188a969b67ef22ef20d59eb63be801a233d856527b73ba945a5a-merged.mount: Deactivated successfully.
Oct 08 09:59:36 compute-0 podman[211360]: 2025-10-08 09:59:36.158651404 +0000 UTC m=+0.166847702 container remove 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:59:36 compute-0 systemd[1]: libpod-conmon-3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645.scope: Deactivated successfully.
Oct 08 09:59:36 compute-0 sudo[211493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrmiqillvzkrqrxygvrchzemiqunneit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917575.96791-2285-113090132573873/AnsiballZ_stat.py'
Oct 08 09:59:36 compute-0 sudo[211493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:36 compute-0 podman[211501]: 2025-10-08 09:59:36.330914479 +0000 UTC m=+0.041052896 container create a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:59:36 compute-0 systemd[1]: Started libpod-conmon-a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9.scope.
Oct 08 09:59:36 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:36 compute-0 podman[211501]: 2025-10-08 09:59:36.313826024 +0000 UTC m=+0.023964431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:59:36 compute-0 podman[211501]: 2025-10-08 09:59:36.414181698 +0000 UTC m=+0.124320135 container init a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 09:59:36 compute-0 podman[211501]: 2025-10-08 09:59:36.427473514 +0000 UTC m=+0.137611921 container start a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 09:59:36 compute-0 podman[211501]: 2025-10-08 09:59:36.447075155 +0000 UTC m=+0.157213592 container attach a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 09:59:36 compute-0 python3.9[211495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:36 compute-0 sudo[211493]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct 08 09:59:36 compute-0 recursing_maxwell[211518]: --> passed data devices: 0 physical, 1 LVM
Oct 08 09:59:36 compute-0 recursing_maxwell[211518]: --> All data devices are unavailable
Oct 08 09:59:36 compute-0 systemd[1]: libpod-a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9.scope: Deactivated successfully.
Oct 08 09:59:36 compute-0 podman[211501]: 2025-10-08 09:59:36.77684905 +0000 UTC m=+0.486987457 container died a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 09:59:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3-merged.mount: Deactivated successfully.
Oct 08 09:59:36 compute-0 sudo[211654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjspaoxshdtdonjwvpqeqqqbmbysuole ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917575.96791-2285-113090132573873/AnsiballZ_copy.py'
Oct 08 09:59:36 compute-0 sudo[211654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:36 compute-0 ceph-mon[73572]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 519 B/s rd, 0 op/s
Oct 08 09:59:36 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:59:36 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 09:59:36 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 09:59:36 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 09:59:36 compute-0 podman[211501]: 2025-10-08 09:59:36.830460095 +0000 UTC m=+0.540598502 container remove a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:59:36 compute-0 systemd[1]: libpod-conmon-a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9.scope: Deactivated successfully.
Oct 08 09:59:36 compute-0 sudo[211246]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:36 compute-0 sudo[211667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:59:36 compute-0 sudo[211667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:36 compute-0 sudo[211667]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:36 compute-0 sudo[211692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 09:59:36 compute-0 sudo[211692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:36 compute-0 python3.9[211666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917575.96791-2285-113090132573873/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:37 compute-0 sudo[211654]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:37.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 09:59:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:37.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:59:37 compute-0 podman[211842]: 2025-10-08 09:59:37.382962913 +0000 UTC m=+0.050634594 container create 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 09:59:37 compute-0 systemd[1]: Started libpod-conmon-7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08.scope.
Oct 08 09:59:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:59:37 compute-0 podman[211842]: 2025-10-08 09:59:37.449404398 +0000 UTC m=+0.117076089 container init 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 09:59:37 compute-0 podman[211842]: 2025-10-08 09:59:37.361167207 +0000 UTC m=+0.028838978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:59:37 compute-0 podman[211842]: 2025-10-08 09:59:37.455201046 +0000 UTC m=+0.122872717 container start 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:59:37 compute-0 podman[211842]: 2025-10-08 09:59:37.458136695 +0000 UTC m=+0.125808366 container attach 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:59:37 compute-0 condescending_golick[211899]: 167 167
Oct 08 09:59:37 compute-0 systemd[1]: libpod-7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08.scope: Deactivated successfully.
Oct 08 09:59:37 compute-0 podman[211842]: 2025-10-08 09:59:37.459749421 +0000 UTC m=+0.127421092 container died 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 09:59:37 compute-0 sudo[211931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kenlxcrmpnbqoanpwmecpuslcmodftar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917577.2056708-2285-211746489354310/AnsiballZ_stat.py'
Oct 08 09:59:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2518039d2939ed4151eb4133fea793cc9536eae744575d594c88d8b1d50311c0-merged.mount: Deactivated successfully.
Oct 08 09:59:37 compute-0 sudo[211931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:37 compute-0 podman[211842]: 2025-10-08 09:59:37.500179704 +0000 UTC m=+0.167851375 container remove 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:59:37 compute-0 systemd[1]: libpod-conmon-7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08.scope: Deactivated successfully.
Oct 08 09:59:37 compute-0 podman[211952]: 2025-10-08 09:59:37.657584472 +0000 UTC m=+0.044816715 container create 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 09:59:37 compute-0 systemd[1]: Started libpod-conmon-748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715.scope.
Oct 08 09:59:37 compute-0 python3.9[211939]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:37 compute-0 sudo[211931]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:37 compute-0 podman[211952]: 2025-10-08 09:59:37.721481828 +0000 UTC m=+0.108714101 container init 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:59:37 compute-0 podman[211952]: 2025-10-08 09:59:37.728686815 +0000 UTC m=+0.115919058 container start 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 08 09:59:37 compute-0 podman[211952]: 2025-10-08 09:59:37.642112092 +0000 UTC m=+0.029344365 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:59:37 compute-0 podman[211952]: 2025-10-08 09:59:37.731790722 +0000 UTC m=+0.119022965 container attach 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 09:59:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:37.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
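The recurring radosgw "beast:" access lines in this capture, the anonymous "HEAD / HTTP/1.0" probes arriving every couple of seconds from 192.168.122.100 and 192.168.122.102, look like load-balancer health checks and share one fixed field layout. A minimal parser sketch for that layout, with the pattern inferred from these lines rather than taken from any radosgw documentation:

    import re

    # Field layout inferred from the beast lines above:
    # beast: <handle>: <ip> - <user> [<ts>] "<request>" <status> <bytes> ... latency=<sec>s
    BEAST = re.compile(
        r'beast: (?P<handle>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous '
            '[08/Oct/2025:09:59:37.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000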
Oct 08 09:59:37 compute-0 ceph-mon[73572]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct 08 09:59:38 compute-0 sudo[212099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dznedhhojskuetvzonnnyxiwwocydvon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917577.2056708-2285-211746489354310/AnsiballZ_copy.py'
Oct 08 09:59:38 compute-0 sudo[212099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:38 compute-0 hungry_babbage[211969]: {
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:     "1": [
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:         {
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "devices": [
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "/dev/loop3"
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             ],
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "lv_name": "ceph_lv0",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "lv_size": "21470642176",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "name": "ceph_lv0",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "tags": {
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.cluster_name": "ceph",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.crush_device_class": "",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.encrypted": "0",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.osd_id": "1",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.type": "block",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.vdo": "0",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:                 "ceph.with_tpm": "0"
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             },
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "type": "block",
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:             "vg_name": "ceph_vg0"
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:         }
Oct 08 09:59:38 compute-0 hungry_babbage[211969]:     ]
Oct 08 09:59:38 compute-0 hungry_babbage[211969]: }
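The hungry_babbage output above is what appears to be ceph-volume lvm list --format json relayed line by line through the journal: one OSD ("1") backed by LV ceph_lv0 on /dev/loop3. Stripped of the journal prefixes it parses directly; a minimal sketch, assuming the bare JSON has been saved to lvm_list.json (a hypothetical path):

    import json

    # lvm_list.json: the JSON block above with the journal prefixes removed.
    with open("lvm_list.json") as fh:
        report = json.load(fh)

    # Keys and tag names match the captured report exactly.
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")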
Oct 08 09:59:38 compute-0 systemd[1]: libpod-748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715.scope: Deactivated successfully.
Oct 08 09:59:38 compute-0 podman[211952]: 2025-10-08 09:59:38.050515939 +0000 UTC m=+0.437748182 container died 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:59:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1-merged.mount: Deactivated successfully.
Oct 08 09:59:38 compute-0 podman[211952]: 2025-10-08 09:59:38.097391043 +0000 UTC m=+0.484623276 container remove 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:59:38 compute-0 systemd[1]: libpod-conmon-748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715.scope: Deactivated successfully.
Oct 08 09:59:38 compute-0 sudo[211692]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:38 compute-0 sudo[212116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 09:59:38 compute-0 sudo[212116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:38 compute-0 python3.9[212101]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917577.2056708-2285-211746489354310/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:38 compute-0 sudo[212116]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:38 compute-0 sudo[212099]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:38 compute-0 sudo[212141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 09:59:38 compute-0 sudo[212141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:38 compute-0 sudo[212356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmdpjmlxcyfjvuzcpyhwtymoodbdffmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917578.336809-2285-188981391456574/AnsiballZ_stat.py'
Oct 08 09:59:38 compute-0 sudo[212356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:38 compute-0 podman[212352]: 2025-10-08 09:59:38.618892971 +0000 UTC m=+0.043198580 container create 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 09:59:38 compute-0 systemd[1]: Started libpod-conmon-439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a.scope.
Oct 08 09:59:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:59:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct 08 09:59:38 compute-0 podman[212352]: 2025-10-08 09:59:38.60134948 +0000 UTC m=+0.025655109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:59:38 compute-0 podman[212352]: 2025-10-08 09:59:38.694517279 +0000 UTC m=+0.118822908 container init 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 09:59:38 compute-0 podman[212352]: 2025-10-08 09:59:38.701215138 +0000 UTC m=+0.125520747 container start 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:59:38 compute-0 podman[212352]: 2025-10-08 09:59:38.703800727 +0000 UTC m=+0.128106336 container attach 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 09:59:38 compute-0 focused_cerf[212374]: 167 167
Oct 08 09:59:38 compute-0 systemd[1]: libpod-439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a.scope: Deactivated successfully.
Oct 08 09:59:38 compute-0 podman[212352]: 2025-10-08 09:59:38.705963761 +0000 UTC m=+0.130269370 container died 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:59:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2af6956d2899801a6cfb17b663ac6f7531ed0893761dab4ff6e1c4f72a0dc90d-merged.mount: Deactivated successfully.
Oct 08 09:59:38 compute-0 podman[212352]: 2025-10-08 09:59:38.738599297 +0000 UTC m=+0.162904906 container remove 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 09:59:38 compute-0 systemd[1]: libpod-conmon-439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a.scope: Deactivated successfully.
Oct 08 09:59:38 compute-0 python3.9[212367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:38 compute-0 sudo[212356]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:38 compute-0 podman[212401]: 2025-10-08 09:59:38.885599729 +0000 UTC m=+0.039160271 container create 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 09:59:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:38.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:59:38 compute-0 systemd[1]: Started libpod-conmon-020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea.scope.
Oct 08 09:59:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 09:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:38 compute-0 podman[212401]: 2025-10-08 09:59:38.963022308 +0000 UTC m=+0.116582870 container init 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:59:38 compute-0 podman[212401]: 2025-10-08 09:59:38.867640104 +0000 UTC m=+0.021200666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:59:38 compute-0 podman[212401]: 2025-10-08 09:59:38.969993666 +0000 UTC m=+0.123554208 container start 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Oct 08 09:59:38 compute-0 podman[212401]: 2025-10-08 09:59:38.972563145 +0000 UTC m=+0.126123697 container attach 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 09:59:39 compute-0 sudo[212540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elyvqhxnljplegtchgslajzaovvghdrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917578.336809-2285-188981391456574/AnsiballZ_copy.py'
Oct 08 09:59:39 compute-0 sudo[212540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:39 compute-0 python3.9[212542]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917578.336809-2285-188981391456574/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:39 compute-0 sudo[212540]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:39 compute-0 lvm[212688]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 09:59:39 compute-0 lvm[212688]: VG ceph_vg0 finished
Oct 08 09:59:39 compute-0 reverent_varahamihira[212449]: {}
Oct 08 09:59:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095939 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:59:39 compute-0 systemd[1]: libpod-020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea.scope: Deactivated successfully.
Oct 08 09:59:39 compute-0 podman[212401]: 2025-10-08 09:59:39.67928154 +0000 UTC m=+0.832842092 container died 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 09:59:39 compute-0 systemd[1]: libpod-020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea.scope: Consumed 1.051s CPU time.
Oct 08 09:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02-merged.mount: Deactivated successfully.
Oct 08 09:59:39 compute-0 podman[212401]: 2025-10-08 09:59:39.721833147 +0000 UTC m=+0.875393709 container remove 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 09:59:39 compute-0 systemd[1]: libpod-conmon-020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea.scope: Deactivated successfully.
Oct 08 09:59:39 compute-0 sudo[212141]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:39 compute-0 sudo[212774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twrrddtrcqwyznglnmjlxdndumgqaamk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917579.50294-2285-683954650607/AnsiballZ_stat.py'
Oct 08 09:59:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 09:59:39 compute-0 sudo[212774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:39.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:59:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 09:59:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:59:39 compute-0 sudo[212777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 09:59:39 compute-0 ceph-mon[73572]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct 08 09:59:39 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:59:39 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 09:59:39 compute-0 sudo[212777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:39 compute-0 sudo[212777]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:39 compute-0 python3.9[212776]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:39 compute-0 sudo[212774]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:40.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:40 compute-0 sudo[212923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqvbxssrhtbyfrumrdvxniclmzezcrvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917579.50294-2285-683954650607/AnsiballZ_copy.py'
Oct 08 09:59:40 compute-0 sudo[212923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:40 compute-0 python3.9[212925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917579.50294-2285-683954650607/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:40 compute-0 sudo[212923]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct 08 09:59:40 compute-0 sudo[213075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqjpgwrwicuqsanqntgxdgybkvypyttw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917580.7190804-2285-120139854877645/AnsiballZ_stat.py'
Oct 08 09:59:40 compute-0 sudo[213075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:41 compute-0 python3.9[213077]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 09:59:41 compute-0 sudo[213075]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:41 compute-0 sudo[213199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uifpicnvgsmmxvnvkigncakgnwlmqojx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917580.7190804-2285-120139854877645/AnsiballZ_copy.py'
Oct 08 09:59:41 compute-0 sudo[213199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:41 compute-0 python3.9[213201]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917580.7190804-2285-120139854877645/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:41 compute-0 sudo[213199]: pam_unix(sudo:session): session closed for user root
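The four stat/copy pairs in this stretch (the virtqemud-admin, virtsecretd, virtsecretd-ro and virtsecretd-admin sockets) all render the same libvirt-socket.unit.j2 template, which is why Ansible logs the identical SHA-1 for every override.conf. A small re-check of the deployed files, assuming they are still in place; the expected digest is copied from the copy-task lines above:

    import hashlib
    from pathlib import Path

    # SHA-1 reported by the ansible-ansible.legacy.copy invocations above.
    EXPECTED = "0bad41f409b4ee7e780a2a59dc18f5c84ed99826"
    SOCKETS = ["virtqemud-admin", "virtsecretd", "virtsecretd-ro", "virtsecretd-admin"]

    for name in SOCKETS:
        path = Path(f"/etc/systemd/system/{name}.socket.d/override.conf")
        digest = hashlib.sha1(path.read_bytes()).hexdigest()
        print(f"{path}: {'ok' if digest == EXPECTED else 'drifted: ' + digest}")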
Oct 08 09:59:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:41.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:41 compute-0 ceph-mon[73572]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct 08 09:59:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct 08 09:59:43 compute-0 python3.9[213352]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 09:59:43 compute-0 sudo[213433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 09:59:43 compute-0 sudo[213433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 09:59:43 compute-0 sudo[213433]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:43.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:43 compute-0 sudo[213531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfbakpyquuijpmqihtglmpnljnjgrmsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917583.4701145-2903-250651398797374/AnsiballZ_seboolean.py'
Oct 08 09:59:43 compute-0 sudo[213531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:43 compute-0 ceph-mon[73572]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct 08 09:59:44 compute-0 python3.9[213534]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 08 09:59:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 433 B/s rd, 0 op/s
Oct 08 09:59:45 compute-0 sudo[213531]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:45 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 6.
Oct 08 09:59:45 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:59:45 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 08 09:59:45 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.494s CPU time.
Oct 08 09:59:45 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 09:59:45 compute-0 podman[213616]: 2025-10-08 09:59:45.689149787 +0000 UTC m=+0.075511035 container create 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 09:59:45 compute-0 podman[213616]: 2025-10-08 09:59:45.642487681 +0000 UTC m=+0.028848959 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 09:59:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:45] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 09:59:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:45] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 09:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 09:59:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:45.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:45 compute-0 podman[213616]: 2025-10-08 09:59:45.791281423 +0000 UTC m=+0.177642701 container init 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 08 09:59:45 compute-0 podman[213616]: 2025-10-08 09:59:45.798141377 +0000 UTC m=+0.184502625 container start 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 09:59:45 compute-0 bash[213616]: 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2
Oct 08 09:59:45 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 09:59:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 09:59:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 09:59:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 09:59:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 09:59:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 09:59:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 09:59:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 09:59:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:59:45 compute-0 sudo[213799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrmmrwsrlroryweodgqudtgxqaxejwjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917585.6790357-2927-120993619527290/AnsiballZ_copy.py'
Oct 08 09:59:45 compute-0 sudo[213799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:46 compute-0 ceph-mon[73572]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 433 B/s rd, 0 op/s
Oct 08 09:59:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:46 compute-0 python3.9[213801]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:46 compute-0 sudo[213799]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:46 compute-0 sudo[213951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikjmogziycaaadcagqmgmzdjfgvdtmym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917586.3116617-2927-209396505687337/AnsiballZ_copy.py'
Oct 08 09:59:46 compute-0 sudo[213951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:59:46 compute-0 python3.9[213953]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:46 compute-0 sudo[213951]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:47.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
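Both alertmanager dispatch errors in this window are the same failure: webhook POSTs to the dashboard receivers on compute-1 and compute-2 hitting their context deadline. A quick reachability probe of one endpoint; the URL is copied from the log, while the empty-alerts payload is only a placeholder guess, not the dashboard receiver's actual schema:

    import json
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        URL,
        data=json.dumps({"alerts": []}).encode(),  # placeholder body, shape assumed
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except OSError as exc:  # URLError subclasses OSError
        print("unreachable:", exc)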
Oct 08 09:59:47 compute-0 sudo[214104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onolsnkimwxabyzkhvxszgudaeyvivll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917586.9158754-2927-22891000711177/AnsiballZ_copy.py'
Oct 08 09:59:47 compute-0 sudo[214104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:47 compute-0 python3.9[214106]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:47 compute-0 sudo[214104]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:59:47
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'backups', '.nfs', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.rgw.root', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 09:59:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:47.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:47 compute-0 sudo[214256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpkshvninifknoltcgwekrwwlinjpbin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917587.5414786-2927-128080163879731/AnsiballZ_copy.py'
Oct 08 09:59:47 compute-0 sudo[214256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 09:59:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:59:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:59:48 compute-0 python3.9[214258]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:48 compute-0 sudo[214256]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:48.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:48 compute-0 ceph-mon[73572]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:59:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 09:59:48 compute-0 sudo[214409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firqqlnjbxzlzynrvjgaxltjphnwzzpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917588.181811-2927-151553935949541/AnsiballZ_copy.py'
Oct 08 09:59:48 compute-0 sudo[214409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:48 compute-0 python3.9[214411]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:48 compute-0 sudo[214409]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:59:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:48.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:59:49 compute-0 sudo[214562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weberdfvuwvfamkufxxpniyktyjawchd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917589.0065622-3035-21902623379246/AnsiballZ_copy.py'
Oct 08 09:59:49 compute-0 sudo[214562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:49 compute-0 python3.9[214564]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:49 compute-0 sudo[214562]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:49.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:49 compute-0 sudo[214714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwlarczcfgxgdggcxhkggicaegkbxmeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917589.6163054-3035-10957189534986/AnsiballZ_copy.py'
Oct 08 09:59:49 compute-0 sudo[214714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:50 compute-0 python3.9[214717]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:50 compute-0 sudo[214714]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:50 compute-0 ceph-mon[73572]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 09:59:50 compute-0 sudo[214867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyzciraxscgzvqjigbocmdgbixkzuxly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917590.2744176-3035-46740641929217/AnsiballZ_copy.py'
Oct 08 09:59:50 compute-0 sudo[214867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:50 compute-0 python3.9[214869]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:50 compute-0 sudo[214867]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:51 compute-0 sudo[215020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovrjcbmmxnwzblhmwofgcajuwcrisnbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917590.9294775-3035-131177455103935/AnsiballZ_copy.py'
Oct 08 09:59:51 compute-0 sudo[215020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:51 compute-0 python3.9[215022]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:51 compute-0 ceph-mon[73572]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:51 compute-0 sudo[215020]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:51.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:51 compute-0 sudo[215183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmhbjmwoaydswzxqpcdzvoawbosgicj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917591.6139972-3035-153657068317665/AnsiballZ_copy.py'
Oct 08 09:59:51 compute-0 sudo[215183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:59:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:59:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 09:59:51 compute-0 podman[215146]: 2025-10-08 09:59:51.97414355 +0000 UTC m=+0.125060381 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 08 09:59:52 compute-0 python3.9[215192]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:59:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:52.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:59:52 compute-0 sudo[215183]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095952 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 09:59:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:52 compute-0 sudo[215352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phwlkwhqwkimxbgptckfbkijkjvhmxav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917592.4609964-3143-80550270073279/AnsiballZ_systemd.py'
Oct 08 09:59:52 compute-0 sudo[215352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:53 compute-0 python3.9[215354]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:59:53 compute-0 systemd[1]: Reloading.
Oct 08 09:59:53 compute-0 systemd-rc-local-generator[215385]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:59:53 compute-0 systemd-sysv-generator[215389]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:59:53 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 08 09:59:53 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 08 09:59:53 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 08 09:59:53 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 08 09:59:53 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 08 09:59:53 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 08 09:59:53 compute-0 sudo[215352]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:53 compute-0 ceph-mon[73572]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 09:59:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:53.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:54 compute-0 sudo[215548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffksddcdfidoyizhbbhimmwgartlopkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917593.742149-3143-35943952499447/AnsiballZ_systemd.py'
Oct 08 09:59:54 compute-0 sudo[215548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:54.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:54 compute-0 python3.9[215550]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:59:54 compute-0 systemd[1]: Reloading.
Oct 08 09:59:54 compute-0 systemd-rc-local-generator[215577]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:59:54 compute-0 systemd-sysv-generator[215580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:59:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:59:54 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 08 09:59:54 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 08 09:59:54 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 08 09:59:54 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 08 09:59:54 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 08 09:59:54 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 08 09:59:54 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 08 09:59:54 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 08 09:59:54 compute-0 sudo[215548]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:55 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 08 09:59:55 compute-0 sudo[215765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxnadmquerpqqvvnrgrlgpsbfihzyxzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917594.968896-3143-79742719790300/AnsiballZ_systemd.py'
Oct 08 09:59:55 compute-0 sudo[215765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:55 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 08 09:59:55 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 08 09:59:55 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 08 09:59:55 compute-0 python3.9[215767]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:59:55 compute-0 systemd[1]: Reloading.
Oct 08 09:59:55 compute-0 systemd-rc-local-generator[215800]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:59:55 compute-0 systemd-sysv-generator[215804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:59:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:59:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 09:59:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:55.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:55 compute-0 ceph-mon[73572]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:59:55 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 08 09:59:55 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 08 09:59:55 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 08 09:59:55 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 08 09:59:55 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 08 09:59:55 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 08 09:59:55 compute-0 sudo[215765]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:55 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:59:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:55 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:59:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:55 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:59:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:56 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 09:59:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 09:59:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:56.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 09:59:56 compute-0 setroubleshoot[215690]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 6bfffd59-0a64-4f0c-b98b-fdb9bc299a30
Oct 08 09:59:56 compute-0 setroubleshoot[215690]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 08 09:59:56 compute-0 sudo[215986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oajpthiavkjgsugvxqzvoehhtpicqdpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917596.164176-3143-13384679134387/AnsiballZ_systemd.py'
Oct 08 09:59:56 compute-0 sudo[215986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:59:56 compute-0 python3.9[215988]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:59:56 compute-0 systemd[1]: Reloading.
Oct 08 09:59:56 compute-0 systemd-rc-local-generator[216016]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:59:56 compute-0 systemd-sysv-generator[216019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:59:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:57.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:59:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:57.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 09:59:57 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 08 09:59:57 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 08 09:59:57 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 08 09:59:57 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 08 09:59:57 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 08 09:59:57 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 08 09:59:57 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 08 09:59:57 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 08 09:59:57 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 08 09:59:57 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 08 09:59:57 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 08 09:59:57 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 08 09:59:57 compute-0 sudo[215986]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:59:57.398 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 09:59:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:59:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 09:59:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 09:59:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 09:59:57 compute-0 sudo[216201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oipfxhqqeekuppefhzwyvxnipzbxvgib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917597.3641796-3143-64493038651301/AnsiballZ_systemd.py'
Oct 08 09:59:57 compute-0 sudo[216201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:57.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:57 compute-0 ceph-mon[73572]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:59:57 compute-0 python3.9[216203]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 09:59:57 compute-0 systemd[1]: Reloading.
Oct 08 09:59:58 compute-0 systemd-rc-local-generator[216232]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 09:59:58 compute-0 systemd-sysv-generator[216235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 09:59:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:58.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:58 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 08 09:59:58 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 08 09:59:58 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 08 09:59:58 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 08 09:59:58 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 08 09:59:58 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 08 09:59:58 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 08 09:59:58 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 08 09:59:58 compute-0 sudo[216201]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 09:59:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 09:59:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:58 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 09:59:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:58 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 09:59:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:58 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 09:59:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:58.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 09:59:59 compute-0 sudo[216424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhyvprsaaowxyawnzmuppikpvpdjyewu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917599.0709984-3254-240926547346574/AnsiballZ_file.py'
Oct 08 09:59:59 compute-0 sudo[216424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 09:59:59 compute-0 podman[216387]: 2025-10-08 09:59:59.35367864 +0000 UTC m=+0.053775400 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 08 09:59:59 compute-0 python3.9[216434]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 09:59:59 compute-0 sudo[216424]: pam_unix(sudo:session): session closed for user root
Oct 08 09:59:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 09:59:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 09:59:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:59.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 09:59:59 compute-0 ceph-mon[73572]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:00:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 08 10:00:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 08 10:00:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0 is in unknown state
Oct 08 10:00:00 compute-0 sudo[216585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaxzoecoxjwowmlfcerijwwgokgjzdey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917599.7789896-3278-26105112623651/AnsiballZ_find.py'
Oct 08 10:00:00 compute-0 sudo[216585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:00.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:00 compute-0 python3.9[216587]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 10:00:00 compute-0 sudo[216585]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:00:00 compute-0 sudo[216737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bowljbawnothdizhjhiyayfytxmcrpft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917600.6237023-3302-102623296229257/AnsiballZ_command.py'
Oct 08 10:00:00 compute-0 sudo[216737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:00 compute-0 ceph-mon[73572]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 08 10:00:00 compute-0 ceph-mon[73572]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 08 10:00:00 compute-0 ceph-mon[73572]:     daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0 is in unknown state
Oct 08 10:00:01 compute-0 python3.9[216739]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:00:01 compute-0 sudo[216737]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:01.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:01 compute-0 python3.9[216894]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 10:00:01 compute-0 ceph-mon[73572]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:00:02 compute-0 anacron[1066]: Job `cron.monthly' started
Oct 08 10:00:02 compute-0 anacron[1066]: Job `cron.monthly' terminated
Oct 08 10:00:02 compute-0 anacron[1066]: Normal exit (3 jobs run)
Oct 08 10:00:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:02.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:00:02 compute-0 python3.9[217047]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:00:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:00:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:00:03 compute-0 python3.9[217169]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917602.3965933-3359-76328881645627/.source.xml follow=False _original_basename=secret.xml.j2 checksum=d427a8b5e6de2d31449678af6b172a3fb9e01a89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:03 compute-0 sudo[217194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:00:03 compute-0 sudo[217194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:03 compute-0 sudo[217194]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 10:00:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:03.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 10:00:04 compute-0 ceph-mon[73572]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:00:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:04.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:04 compute-0 sudo[217345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzcedilypkioouckyliwhqxrexxxmfii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917603.7844245-3404-34596426309896/AnsiballZ_command.py'
Oct 08 10:00:04 compute-0 sudo[217345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:04 compute-0 python3.9[217347]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 787292cc-8154-50c4-9e00-e9be3e817149
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:00:04 compute-0 polkitd[6524]: Registered Authentication Agent for unix-process:217349:356302 (system bus name :1.2983 [/usr/bin/pkttyagent --process 217349 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 08 10:00:04 compute-0 polkitd[6524]: Unregistered Authentication Agent for unix-process:217349:356302 (system bus name :1.2983, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 08 10:00:04 compute-0 polkitd[6524]: Registered Authentication Agent for unix-process:217348:356301 (system bus name :1.2984 [/usr/bin/pkttyagent --process 217348 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 08 10:00:04 compute-0 polkitd[6524]: Unregistered Authentication Agent for unix-process:217348:356301 (system bus name :1.2984, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 08 10:00:04 compute-0 sudo[217345]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 10:00:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:00:05 compute-0 python3.9[217521]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:05 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3c04000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:05 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:05] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct 08 10:00:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:05] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct 08 10:00:05 compute-0 sudo[217675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhjguprpuvnciflahqscsngnpuwtfhdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917605.433183-3452-21973135337765/AnsiballZ_command.py'
Oct 08 10:00:05 compute-0 sudo[217675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:05.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:05 compute-0 sudo[217675]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:06 compute-0 ceph-mon[73572]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:00:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:06.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:06 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:06 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 08 10:00:06 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 08 10:00:06 compute-0 sudo[217829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyvieosfvxdcunovqqcvjkashxsmpvbd ; FSID=787292cc-8154-50c4-9e00-e9be3e817149 KEY=AQADMuZoAAAAABAAatv7Ix+93M4zPKi4UUkwMw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917606.3721344-3476-11815291812855/AnsiballZ_command.py'
Oct 08 10:00:06 compute-0 sudo[217829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:00:06 compute-0 polkitd[6524]: Registered Authentication Agent for unix-process:217832:356551 (system bus name :1.2987 [/usr/bin/pkttyagent --process 217832 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 08 10:00:06 compute-0 polkitd[6524]: Unregistered Authentication Agent for unix-process:217832:356551 (system bus name :1.2987, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 08 10:00:06 compute-0 sudo[217829]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:07.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:00:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:07.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:00:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:07.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:00:07 compute-0 sudo[217988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpwjgdbfjtzsxhqxeuwsdgoaycqvesen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917607.1483948-3500-122022577392420/AnsiballZ_copy.py'
Oct 08 10:00:07 compute-0 sudo[217988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:07 compute-0 python3.9[217990]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:07 compute-0 sudo[217988]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100007 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:00:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:07 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:07 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:00:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:07.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:00:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:07 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:00:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:07 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:00:08 compute-0 ceph-mon[73572]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:00:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:08.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:08 compute-0 sudo[218141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexpdbcevugtajdpxucvkrckdwktdrjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917607.8698964-3524-111120123926551/AnsiballZ_stat.py'
Oct 08 10:00:08 compute-0 sudo[218141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:08 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:08 compute-0 python3.9[218143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:08 compute-0 sudo[218141]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:00:08 compute-0 sudo[218264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtcrdttjzxmctcaetegdnmlbeymuuyvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917607.8698964-3524-111120123926551/AnsiballZ_copy.py'
Oct 08 10:00:08 compute-0 sudo[218264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:08 compute-0 python3.9[218266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917607.8698964-3524-111120123926551/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:08.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:00:08 compute-0 sudo[218264]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:09 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:09 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:09.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:09 compute-0 sudo[218417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqldqoemsjvbzskmycmgsfoluezdqzsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917609.3173356-3572-235968668310711/AnsiballZ_file.py'
Oct 08 10:00:09 compute-0 sudo[218417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:10 compute-0 python3.9[218419]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:10 compute-0 sudo[218417]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:10 compute-0 ceph-mon[73572]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:00:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:10.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:10 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:10 compute-0 sudo[218570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njagafwgqrstmnhttxkbmkmfckvzhntn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917610.3876185-3596-68385020189566/AnsiballZ_stat.py'
Oct 08 10:00:10 compute-0 sudo[218570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 08 10:00:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:10 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:00:10 compute-0 python3.9[218572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:10 compute-0 sudo[218570]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:11 compute-0 sudo[218649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gncfvaikcewlogmzvlndibaselqzztyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917610.3876185-3596-68385020189566/AnsiballZ_file.py'
Oct 08 10:00:11 compute-0 sudo[218649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:11 compute-0 python3.9[218651]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:11 compute-0 sudo[218649]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:11 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:11 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:11.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:11 compute-0 sudo[218802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-athfxrvbsehcdqczfqdneufypweauarp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917611.6243994-3632-79595393413709/AnsiballZ_stat.py'
Oct 08 10:00:11 compute-0 sudo[218802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:12 compute-0 python3.9[218804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:12 compute-0 sudo[218802]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:00:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:12.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:00:12 compute-0 ceph-mon[73572]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 08 10:00:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:12 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:12 compute-0 sudo[218880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vedwjbtdzsciucjyqsehjqvglzdxjhun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917611.6243994-3632-79595393413709/AnsiballZ_file.py'
Oct 08 10:00:12 compute-0 sudo[218880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:12 compute-0 python3.9[218882]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xwgqcnpk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:12 compute-0 sudo[218880]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 08 10:00:13 compute-0 sudo[219033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmzdollrilnudwbjagntyzjhtmfglmjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917612.9107118-3668-168883634912353/AnsiballZ_stat.py'
Oct 08 10:00:13 compute-0 sudo[219033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:13 compute-0 python3.9[219035]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:13 compute-0 sudo[219033]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:13 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:13 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:13 compute-0 sudo[219111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmouhzukayxiwycjwsxkkuxfhjtemekc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917612.9107118-3668-168883634912353/AnsiballZ_file.py'
Oct 08 10:00:13 compute-0 sudo[219111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:13 compute-0 python3.9[219113]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:14 compute-0 sudo[219111]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:14.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:14 compute-0 ceph-mon[73572]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 08 10:00:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:14 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:14 compute-0 sudo[219264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yumimgakbqjcenmaylvbnidsalbifiuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917614.2422183-3707-235909908494948/AnsiballZ_command.py'
Oct 08 10:00:14 compute-0 sudo[219264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100014 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:00:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 08 10:00:14 compute-0 python3.9[219266]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:00:14 compute-0 sudo[219264]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:15 compute-0 sudo[219418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixukjjukcodweczcmsiikaosisrngnml ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917615.0258467-3731-52289438210989/AnsiballZ_edpm_nftables_from_files.py'
Oct 08 10:00:15 compute-0 sudo[219418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:15 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:15] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct 08 10:00:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:15] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct 08 10:00:15 compute-0 python3[219420]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 08 10:00:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:15 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:15 compute-0 sudo[219418]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:16.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:16 compute-0 ceph-mon[73572]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 08 10:00:16 compute-0 sudo[219571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dakkeaklzeevprsrxkopxxbfzcjtuoxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917615.9545262-3755-181989626118425/AnsiballZ_stat.py'
Oct 08 10:00:16 compute-0 sudo[219571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:16 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:16 compute-0 python3.9[219573]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:16 compute-0 sudo[219571]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 10:00:16 compute-0 sudo[219649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odpubukxoqlwlpzgurwskcpeymjkcjdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917615.9545262-3755-181989626118425/AnsiballZ_file.py'
Oct 08 10:00:16 compute-0 sudo[219649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:16 compute-0 python3.9[219651]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:16 compute-0 sudo[219649]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:00:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:17.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:00:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:17.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:00:17 compute-0 sudo[219802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umiwqwgasnkwlpxwpojpvmjjydratvma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917617.2228465-3791-281338792983402/AnsiballZ_stat.py'
Oct 08 10:00:17 compute-0 sudo[219802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:17 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:17 compute-0 python3.9[219804]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:17 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:17 compute-0 sudo[219802]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:17.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:00:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:00:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:00:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:00:17 compute-0 sudo[219881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nutiwxtrfzhwgzihriexuhvgkiikmovk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917617.2228465-3791-281338792983402/AnsiballZ_file.py'
Oct 08 10:00:18 compute-0 sudo[219881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:18.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:00:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:00:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:00:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:00:18 compute-0 ceph-mon[73572]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 10:00:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:00:18 compute-0 python3.9[219883]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:18 compute-0 sudo[219881]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:18 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 10:00:18 compute-0 sudo[220033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjifastldluznixxyhcqtwioirmxwzeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917618.4865792-3827-145259696826539/AnsiballZ_stat.py'
Oct 08 10:00:18 compute-0 sudo[220033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:18.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:00:18 compute-0 python3.9[220035]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:18 compute-0 sudo[220033]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:19 compute-0 sudo[220112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxinimxhauykavdwhejtgirenuodfuau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917618.4865792-3827-145259696826539/AnsiballZ_file.py'
Oct 08 10:00:19 compute-0 sudo[220112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:19 compute-0 python3.9[220114]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:19 compute-0 sudo[220112]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:19 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:19 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:00:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:19.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:00:19 compute-0 sudo[220265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olqawbgjptlgufgoejgnabifyzqokgpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917619.7476764-3863-96007073368700/AnsiballZ_stat.py'
Oct 08 10:00:19 compute-0 sudo[220265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:00:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:20.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:00:20 compute-0 python3.9[220267]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:20 compute-0 sudo[220265]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:20 compute-0 ceph-mon[73572]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 10:00:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:20 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:20 compute-0 sudo[220343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqiqhfxkycjxmeqquflvntgfdobigfge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917619.7476764-3863-96007073368700/AnsiballZ_file.py'
Oct 08 10:00:20 compute-0 sudo[220343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:20 compute-0 python3.9[220345]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:20 compute-0 sudo[220343]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 10:00:21 compute-0 sudo[220496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roybsedytrxuragbebscvxajxuyvkvbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917620.9783435-3899-14575891026483/AnsiballZ_stat.py'
Oct 08 10:00:21 compute-0 sudo[220496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:21 compute-0 python3.9[220498]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:21 compute-0 sudo[220496]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:21 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:21 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 10:00:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:21.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 10:00:21 compute-0 sudo[220621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isonjbjrpsdzjjudlyibotjhlplrgkbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917620.9783435-3899-14575891026483/AnsiballZ_copy.py'
Oct 08 10:00:21 compute-0 sudo[220621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:22 compute-0 python3.9[220623]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917620.9783435-3899-14575891026483/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:22 compute-0 sudo[220621]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:22 compute-0 ceph-mon[73572]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 08 10:00:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:22 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:00:22 compute-0 sudo[220785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsworwtyyjvtjsvnvrmwufrpzhrlckyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917622.419605-3944-164602651193350/AnsiballZ_file.py'
Oct 08 10:00:22 compute-0 sudo[220785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:22 compute-0 podman[220748]: 2025-10-08 10:00:22.792969858 +0000 UTC m=+0.104592761 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:00:22 compute-0 python3.9[220792]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:22 compute-0 sudo[220785]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:23 compute-0 sudo[220953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbsnrcihsnvrlhutdanhfnoqrrnqxonm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917623.1247587-3968-233292910937475/AnsiballZ_command.py'
Oct 08 10:00:23 compute-0 sudo[220953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:23 compute-0 ceph-mon[73572]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:00:23 compute-0 python3.9[220955]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:00:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:23 compute-0 sudo[220953]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:23 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:23 compute-0 sudo[220965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:00:23 compute-0 sudo[220965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:23 compute-0 sudo[220965]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:23 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:23.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:24.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:24 compute-0 sudo[221134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edyranujmaxrwjwfnjihzeszdjspkjes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917623.8560302-3992-240190720382063/AnsiballZ_blockinfile.py'
Oct 08 10:00:24 compute-0 sudo[221134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:24 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:24 compute-0 python3.9[221136]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:24 compute-0 sudo[221134]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:00:24 compute-0 sudo[221286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvdykaufbslywyyedwkmzlyggiiekunu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917624.7752943-4019-203145025199643/AnsiballZ_command.py'
Oct 08 10:00:24 compute-0 sudo[221286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:25 compute-0 python3.9[221288]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:00:25 compute-0 sudo[221286]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:25 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:25] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:00:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:25] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:00:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:25 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:25 compute-0 ceph-mon[73572]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:00:25 compute-0 sudo[221440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mclyjualarsukzcsqsdziouiwzaoanld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917625.5266273-4043-99599663814941/AnsiballZ_stat.py'
Oct 08 10:00:25 compute-0 sudo[221440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:25.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:26 compute-0 python3.9[221442]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:00:26 compute-0 sudo[221440]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 10:00:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:26.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 10:00:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:26 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:26 compute-0 sudo[221595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgelxbgmhsrfiicobdsfahhsjbejxyrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917626.2281337-4067-167676785098611/AnsiballZ_command.py'
Oct 08 10:00:26 compute-0 sudo[221595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:26 compute-0 python3.9[221597]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:00:26 compute-0 sudo[221595]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.799775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626799851, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4199, "num_deletes": 502, "total_data_size": 8625206, "memory_usage": 8754368, "flush_reason": "Manual Compaction"}
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626845670, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8359867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13198, "largest_seqno": 17396, "table_properties": {"data_size": 8342098, "index_size": 12023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36519, "raw_average_key_size": 19, "raw_value_size": 8305374, "raw_average_value_size": 4477, "num_data_blocks": 525, "num_entries": 1855, "num_filter_entries": 1855, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917182, "oldest_key_time": 1759917182, "file_creation_time": 1759917626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 45893 microseconds, and 24175 cpu microseconds.
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.845706) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8359867 bytes OK
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.845722) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.847189) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.847201) EVENT_LOG_v1 {"time_micros": 1759917626847197, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.847215) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8608427, prev total WAL file size 8608427, number of live WAL files 2.
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.849112) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8163KB)], [32(12MB)]
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626849201, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 20989986, "oldest_snapshot_seqno": -1}
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5061 keys, 15487052 bytes, temperature: kUnknown
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626918324, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15487052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15448451, "index_size": 24859, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 126580, "raw_average_key_size": 25, "raw_value_size": 15351800, "raw_average_value_size": 3033, "num_data_blocks": 1043, "num_entries": 5061, "num_filter_entries": 5061, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.918689) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15487052 bytes
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.920302) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 303.2 rd, 223.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.0 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6083, records dropped: 1022 output_compression: NoCompression
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.920339) EVENT_LOG_v1 {"time_micros": 1759917626920322, "job": 14, "event": "compaction_finished", "compaction_time_micros": 69230, "compaction_time_cpu_micros": 34526, "output_level": 6, "num_output_files": 1, "total_output_size": 15487052, "num_input_records": 6083, "num_output_records": 5061, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626923428, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626928415, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.848968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:00:26 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:00:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:27.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:00:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:27.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:00:27 compute-0 sudo[221751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szunihxyxeatxhbekpodnewgfmgfwudj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917626.980484-4091-249642964229419/AnsiballZ_file.py'
Oct 08 10:00:27 compute-0 sudo[221751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:27 compute-0 python3.9[221753]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:27 compute-0 sudo[221751]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:27 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:27 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:27 compute-0 ceph-mon[73572]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:27.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:27 compute-0 sudo[221904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umbusvugwdekzlolqxsnxfxijtfbgfqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917627.6770072-4115-171306775214405/AnsiballZ_stat.py'
Oct 08 10:00:27 compute-0 sudo[221904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 08 10:00:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:28.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 08 10:00:28 compute-0 python3.9[221906]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:28 compute-0 sudo[221904]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:28 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:28 compute-0 sudo[222027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhcjvnpfzxdjweugzlwvrfpvxwqoashl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917627.6770072-4115-171306775214405/AnsiballZ_copy.py'
Oct 08 10:00:28 compute-0 sudo[222027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:28 compute-0 python3.9[222029]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917627.6770072-4115-171306775214405/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:28 compute-0 sudo[222027]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:28.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:00:29 compute-0 sudo[222180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glgghoxgyegqsuxxydofyfcjzmfgyijk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917628.9816735-4160-102771871899239/AnsiballZ_stat.py'
Oct 08 10:00:29 compute-0 sudo[222180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:29 compute-0 python3.9[222182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:29 compute-0 sudo[222180]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:29 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:29 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:29 compute-0 ceph-mon[73572]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:29 compute-0 podman[222277]: 2025-10-08 10:00:29.835206896 +0000 UTC m=+0.049634331 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 08 10:00:29 compute-0 sudo[222320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyllqqupdyzhaoyczwdmttzyopjaghmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917628.9816735-4160-102771871899239/AnsiballZ_copy.py'
Oct 08 10:00:29 compute-0 sudo[222320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:30 compute-0 python3.9[222324]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917628.9816735-4160-102771871899239/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:30 compute-0 sudo[222320]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:30.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:30 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:30 compute-0 sudo[222476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnbiahjcbddmsrygufuiwzojjykorhqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917630.3207877-4205-107230438104499/AnsiballZ_stat.py'
Oct 08 10:00:30 compute-0 sudo[222476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:30 compute-0 python3.9[222478]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:00:30 compute-0 sudo[222476]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:31 compute-0 sudo[222600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izwwvtapsybunkcuushlzwgrxeizflat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917630.3207877-4205-107230438104499/AnsiballZ_copy.py'
Oct 08 10:00:31 compute-0 sudo[222600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:31 compute-0 python3.9[222602]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917630.3207877-4205-107230438104499/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:00:31 compute-0 sudo[222600]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:31 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:31 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:31.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:31 compute-0 ceph-mon[73572]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:31 compute-0 sudo[222753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyvtrbqqijeddgvhokiackjrrjwdwnyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917631.672658-4250-213789011161569/AnsiballZ_systemd.py'
Oct 08 10:00:31 compute-0 sudo[222753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:32 compute-0 python3.9[222755]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:00:32 compute-0 systemd[1]: Reloading.
Oct 08 10:00:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:32 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:32 compute-0 systemd-sysv-generator[222786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:00:32 compute-0 systemd-rc-local-generator[222783]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:00:32 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 08 10:00:32 compute-0 sudo[222753]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:00:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:00:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:00:33 compute-0 sudo[222946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmvupuishylbmajbftoablittdmogcke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917632.9064064-4274-27074078337163/AnsiballZ_systemd.py'
Oct 08 10:00:33 compute-0 sudo[222946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:33 compute-0 python3.9[222948]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 08 10:00:33 compute-0 systemd[1]: Reloading.
Oct 08 10:00:33 compute-0 systemd-sysv-generator[222978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:00:33 compute-0 systemd-rc-local-generator[222975]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:00:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:33 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:33 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:33 compute-0 systemd[1]: Reloading.
Oct 08 10:00:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:33.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:33 compute-0 ceph-mon[73572]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:33 compute-0 systemd-rc-local-generator[223011]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:00:33 compute-0 systemd-sysv-generator[223014]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:00:34 compute-0 sudo[222946]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:34.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:34 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:00:34 compute-0 sshd-session[163300]: Connection closed by 192.168.122.30 port 51082
Oct 08 10:00:34 compute-0 sshd-session[163297]: pam_unix(sshd:session): session closed for user zuul
Oct 08 10:00:34 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 08 10:00:34 compute-0 systemd[1]: session-54.scope: Consumed 3min 26.799s CPU time.
Oct 08 10:00:34 compute-0 systemd-logind[798]: Session 54 logged out. Waiting for processes to exit.
Oct 08 10:00:34 compute-0 systemd-logind[798]: Removed session 54.
Oct 08 10:00:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:35 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:00:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:00:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:35 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:35.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:35 compute-0 ceph-mon[73572]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:00:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:00:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:36.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:00:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:36 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:37.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:00:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:37 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:37 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:37.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:37 compute-0 ceph-mon[73572]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:38 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:38.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:00:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:00:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:38.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:00:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100039 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:00:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:39 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:39 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3c04000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:39 compute-0 ceph-mon[73572]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:40 compute-0 sudo[223052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:00:40 compute-0 sudo[223052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:40 compute-0 sudo[223052]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:40.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:40 compute-0 sudo[223077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:00:40 compute-0 sudo[223077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:40 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:40 compute-0 sudo[223077]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:00:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:00:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:00:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:00:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 0 op/s
Oct 08 10:00:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 377 B/s rd, 0 op/s
Oct 08 10:00:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:00:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:00:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:00:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:00:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:00:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:00:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:00:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:00:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:00:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:00:40 compute-0 sudo[223132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:00:40 compute-0 sudo[223132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:40 compute-0 sudo[223132]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:00:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:00:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:00:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:00:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:00:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:00:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:00:40 compute-0 sudo[223157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:00:40 compute-0 sudo[223157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:41 compute-0 ceph-osd[81751]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000080s
Oct 08 10:00:41 compute-0 podman[223223]: 2025-10-08 10:00:41.370177462 +0000 UTC m=+0.042812228 container create a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 10:00:41 compute-0 systemd[1]: Started libpod-conmon-a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d.scope.
Oct 08 10:00:41 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:00:41 compute-0 podman[223223]: 2025-10-08 10:00:41.352744511 +0000 UTC m=+0.025379307 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:00:41 compute-0 podman[223223]: 2025-10-08 10:00:41.45619673 +0000 UTC m=+0.128831546 container init a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:00:41 compute-0 podman[223223]: 2025-10-08 10:00:41.462863644 +0000 UTC m=+0.135498410 container start a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 10:00:41 compute-0 podman[223223]: 2025-10-08 10:00:41.466861133 +0000 UTC m=+0.139495949 container attach a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 10:00:41 compute-0 vigilant_proskuriakova[223239]: 167 167
Oct 08 10:00:41 compute-0 systemd[1]: libpod-a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d.scope: Deactivated successfully.
Oct 08 10:00:41 compute-0 podman[223223]: 2025-10-08 10:00:41.469780567 +0000 UTC m=+0.142415333 container died a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:00:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d27befae0dfbc73295ff6ee4e0bfad8b447e73385369f5b3e9e3b7bc6b886b7f-merged.mount: Deactivated successfully.
Oct 08 10:00:41 compute-0 podman[223223]: 2025-10-08 10:00:41.53794742 +0000 UTC m=+0.210582196 container remove a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:00:41 compute-0 systemd[1]: libpod-conmon-a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d.scope: Deactivated successfully.
Oct 08 10:00:41 compute-0 sshd-session[223258]: Accepted publickey for zuul from 192.168.122.30 port 34140 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 10:00:41 compute-0 systemd-logind[798]: New session 55 of user zuul.
Oct 08 10:00:41 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 08 10:00:41 compute-0 sshd-session[223258]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 10:00:41 compute-0 podman[223265]: 2025-10-08 10:00:41.703966682 +0000 UTC m=+0.041412903 container create d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:00:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:41 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:41 compute-0 systemd[1]: Started libpod-conmon-d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799.scope.
Oct 08 10:00:41 compute-0 podman[223265]: 2025-10-08 10:00:41.687361318 +0000 UTC m=+0.024807569 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:00:41 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:41 compute-0 podman[223265]: 2025-10-08 10:00:41.808453594 +0000 UTC m=+0.145899835 container init d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 10:00:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:41 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:41 compute-0 podman[223265]: 2025-10-08 10:00:41.817553896 +0000 UTC m=+0.155000117 container start d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 10:00:41 compute-0 podman[223265]: 2025-10-08 10:00:41.821841945 +0000 UTC m=+0.159288186 container attach d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:00:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:41.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:41 compute-0 ceph-mon[73572]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:00:41 compute-0 ceph-mon[73572]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 0 op/s
Oct 08 10:00:41 compute-0 ceph-mon[73572]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 377 B/s rd, 0 op/s
Oct 08 10:00:42 compute-0 awesome_hertz[223296]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:00:42 compute-0 awesome_hertz[223296]: --> All data devices are unavailable
Oct 08 10:00:42 compute-0 systemd[1]: libpod-d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799.scope: Deactivated successfully.
Oct 08 10:00:42 compute-0 podman[223265]: 2025-10-08 10:00:42.167824647 +0000 UTC m=+0.505270878 container died d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:00:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:00:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:42.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca-merged.mount: Deactivated successfully.
Oct 08 10:00:42 compute-0 podman[223265]: 2025-10-08 10:00:42.218302281 +0000 UTC m=+0.555748502 container remove d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 10:00:42 compute-0 systemd[1]: libpod-conmon-d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799.scope: Deactivated successfully.
Oct 08 10:00:42 compute-0 sudo[223157]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:42 compute-0 sudo[223409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:00:42 compute-0 sudo[223409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:42 compute-0 sudo[223409]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:42 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3c04001e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:42 compute-0 sudo[223458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:00:42 compute-0 sudo[223458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:42 compute-0 python3.9[223507]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 10:00:42 compute-0 podman[223553]: 2025-10-08 10:00:42.790884805 +0000 UTC m=+0.042203929 container create 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 10:00:42 compute-0 systemd[1]: Started libpod-conmon-12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe.scope.
Oct 08 10:00:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 08 10:00:42 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:00:42 compute-0 podman[223553]: 2025-10-08 10:00:42.774782337 +0000 UTC m=+0.026101481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:00:42 compute-0 podman[223553]: 2025-10-08 10:00:42.883594518 +0000 UTC m=+0.134913652 container init 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 08 10:00:42 compute-0 podman[223553]: 2025-10-08 10:00:42.890643784 +0000 UTC m=+0.141962898 container start 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:00:42 compute-0 podman[223553]: 2025-10-08 10:00:42.894412116 +0000 UTC m=+0.145731270 container attach 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 10:00:42 compute-0 tender_mclean[223569]: 167 167
Oct 08 10:00:42 compute-0 systemd[1]: libpod-12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe.scope: Deactivated successfully.
Oct 08 10:00:42 compute-0 podman[223553]: 2025-10-08 10:00:42.898560269 +0000 UTC m=+0.149879433 container died 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4560307df61da1601e80d5e32b22bf5f08e3458bbafee3ffa00c5d822a9370b-merged.mount: Deactivated successfully.
Oct 08 10:00:42 compute-0 podman[223553]: 2025-10-08 10:00:42.952891268 +0000 UTC m=+0.204210392 container remove 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Oct 08 10:00:42 compute-0 systemd[1]: libpod-conmon-12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe.scope: Deactivated successfully.
Oct 08 10:00:43 compute-0 podman[223619]: 2025-10-08 10:00:43.150890778 +0000 UTC m=+0.043463839 container create 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:00:43 compute-0 systemd[1]: Started libpod-conmon-25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0.scope.
Oct 08 10:00:43 compute-0 podman[223619]: 2025-10-08 10:00:43.129150189 +0000 UTC m=+0.021723250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:00:43 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:43 compute-0 podman[223619]: 2025-10-08 10:00:43.256180647 +0000 UTC m=+0.148753768 container init 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:00:43 compute-0 podman[223619]: 2025-10-08 10:00:43.268760211 +0000 UTC m=+0.161333282 container start 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 10:00:43 compute-0 podman[223619]: 2025-10-08 10:00:43.273598226 +0000 UTC m=+0.166171307 container attach 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 10:00:43 compute-0 strange_khayyam[223635]: {
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:     "1": [
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:         {
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "devices": [
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "/dev/loop3"
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             ],
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "lv_name": "ceph_lv0",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "lv_size": "21470642176",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "name": "ceph_lv0",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "tags": {
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.cluster_name": "ceph",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.crush_device_class": "",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.encrypted": "0",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.osd_id": "1",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.type": "block",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.vdo": "0",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:                 "ceph.with_tpm": "0"
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             },
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "type": "block",
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:             "vg_name": "ceph_vg0"
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:         }
Oct 08 10:00:43 compute-0 strange_khayyam[223635]:     ]
Oct 08 10:00:43 compute-0 strange_khayyam[223635]: }
Oct 08 10:00:43 compute-0 systemd[1]: libpod-25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0.scope: Deactivated successfully.
Oct 08 10:00:43 compute-0 podman[223619]: 2025-10-08 10:00:43.615942832 +0000 UTC m=+0.508515883 container died 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:00:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795-merged.mount: Deactivated successfully.
Oct 08 10:00:43 compute-0 podman[223619]: 2025-10-08 10:00:43.662582092 +0000 UTC m=+0.555155113 container remove 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:00:43 compute-0 systemd[1]: libpod-conmon-25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0.scope: Deactivated successfully.
Oct 08 10:00:43 compute-0 sudo[223458]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:43 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:43 compute-0 sudo[223731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:00:43 compute-0 sudo[223731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:43 compute-0 sudo[223731]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:43 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:43 compute-0 sudo[223775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:00:43 compute-0 sudo[223775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:43 compute-0 sudo[223775]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:43 compute-0 sudo[223780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:00:43 compute-0 sudo[223780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:43.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:43 compute-0 sudo[223856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndxokaosrnlysdufpevczbssrnaltaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917643.397051-62-54462164036723/AnsiballZ_file.py'
Oct 08 10:00:43 compute-0 sudo[223856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:43 compute-0 ceph-mon[73572]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 08 10:00:44 compute-0 python3.9[223858]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:00:44 compute-0 sudo[223856]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:44.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:44 compute-0 podman[223924]: 2025-10-08 10:00:44.201871575 +0000 UTC m=+0.042657754 container create 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:00:44 compute-0 systemd[1]: Started libpod-conmon-191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8.scope.
Oct 08 10:00:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:00:44 compute-0 podman[223924]: 2025-10-08 10:00:44.180834138 +0000 UTC m=+0.021620347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:00:44 compute-0 podman[223924]: 2025-10-08 10:00:44.291571912 +0000 UTC m=+0.132358101 container init 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:00:44 compute-0 podman[223924]: 2025-10-08 10:00:44.303429613 +0000 UTC m=+0.144215782 container start 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:00:44 compute-0 podman[223924]: 2025-10-08 10:00:44.306723608 +0000 UTC m=+0.147509797 container attach 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:00:44 compute-0 quirky_shamir[223969]: 167 167
Oct 08 10:00:44 compute-0 systemd[1]: libpod-191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8.scope: Deactivated successfully.
Oct 08 10:00:44 compute-0 podman[223924]: 2025-10-08 10:00:44.311793392 +0000 UTC m=+0.152579591 container died 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 08 10:00:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:44 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-85816907de03f36e3888a37cc2ac484b8be21399f188a9c340c068fe0432098a-merged.mount: Deactivated successfully.
Oct 08 10:00:44 compute-0 podman[223924]: 2025-10-08 10:00:44.363167425 +0000 UTC m=+0.203953594 container remove 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:00:44 compute-0 systemd[1]: libpod-conmon-191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8.scope: Deactivated successfully.
Oct 08 10:00:44 compute-0 sudo[224091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaqxbszxmksrtqxueajfghamivuknjeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917644.2276144-62-242028423959021/AnsiballZ_file.py'
Oct 08 10:00:44 compute-0 sudo[224091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:44 compute-0 podman[224086]: 2025-10-08 10:00:44.526989316 +0000 UTC m=+0.046258550 container create de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 10:00:44 compute-0 systemd[1]: Started libpod-conmon-de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c.scope.
Oct 08 10:00:44 compute-0 podman[224086]: 2025-10-08 10:00:44.508621525 +0000 UTC m=+0.027890779 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:00:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:00:44 compute-0 podman[224086]: 2025-10-08 10:00:44.619183253 +0000 UTC m=+0.138452517 container init de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:00:44 compute-0 podman[224086]: 2025-10-08 10:00:44.626220369 +0000 UTC m=+0.145489613 container start de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:00:44 compute-0 podman[224086]: 2025-10-08 10:00:44.642072059 +0000 UTC m=+0.161341323 container attach de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 08 10:00:44 compute-0 python3.9[224102]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:00:44 compute-0 sudo[224091]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 252 B/s rd, 0 op/s
Oct 08 10:00:45 compute-0 sudo[224323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opjkylelhreqtzrrccgcxyubqwwwlyjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917644.8785224-62-59323788435671/AnsiballZ_file.py'
Oct 08 10:00:45 compute-0 sudo[224323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:45 compute-0 lvm[224335]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:00:45 compute-0 lvm[224335]: VG ceph_vg0 finished
Oct 08 10:00:45 compute-0 hardcore_chatterjee[224108]: {}
Oct 08 10:00:45 compute-0 systemd[1]: libpod-de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c.scope: Deactivated successfully.
Oct 08 10:00:45 compute-0 systemd[1]: libpod-de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c.scope: Consumed 1.185s CPU time.
Oct 08 10:00:45 compute-0 podman[224086]: 2025-10-08 10:00:45.369100462 +0000 UTC m=+0.888369736 container died de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:00:45 compute-0 python3.9[224329]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4-merged.mount: Deactivated successfully.
Oct 08 10:00:45 compute-0 sudo[224323]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:45 compute-0 podman[224086]: 2025-10-08 10:00:45.420242988 +0000 UTC m=+0.939512232 container remove de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:00:45 compute-0 systemd[1]: libpod-conmon-de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c.scope: Deactivated successfully.
Oct 08 10:00:45 compute-0 sudo[223780]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:00:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:00:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:00:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:00:45 compute-0 sudo[224427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:00:45 compute-0 sudo[224427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:00:45 compute-0 sudo[224427]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3c04001e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:00:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:00:45 compute-0 sudo[224525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpqzkpbwryasmglulvopqnfaxzpgboex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917645.5400174-62-143932018661663/AnsiballZ_file.py'
Oct 08 10:00:45 compute-0 sudo[224525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:00:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:45.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:00:46 compute-0 python3.9[224527]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 08 10:00:46 compute-0 ceph-mon[73572]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 252 B/s rd, 0 op/s
Oct 08 10:00:46 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:00:46 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:00:46 compute-0 sudo[224525]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:00:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:46.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:00:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:46 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:46 compute-0 sudo[224678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nebyjexwhchxrtxswgpcbrujfbimmwgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917646.137271-62-250604056292909/AnsiballZ_file.py'
Oct 08 10:00:46 compute-0 sudo[224678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:46 compute-0 python3.9[224680]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:00:46 compute-0 sudo[224678]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Oct 08 10:00:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:47.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:00:47 compute-0 sudo[224831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzzktbrwtcwhlcusgfqzlhoogbmbgaxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917647.0190866-170-242760349962221/AnsiballZ_stat.py'
Oct 08 10:00:47 compute-0 sudo[224831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:47 compute-0 python3.9[224833]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:00:47
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', '.nfs', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes']
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:00:47 compute-0 sudo[224831]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:47 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:47 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:00:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:00:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:47.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:00:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:00:48 compute-0 ceph-mon[73572]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Oct 08 10:00:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:00:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:48 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:48 compute-0 sudo[224986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixgcztldimggcwhhshakcrqqynigzof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917647.8449843-194-50042954868320/AnsiballZ_systemd.py'
Oct 08 10:00:48 compute-0 sudo[224986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:48 compute-0 python3.9[224988]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:00:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:48 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:00:48 compute-0 systemd[1]: Reloading.
Oct 08 10:00:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 503 B/s wr, 2 op/s
Oct 08 10:00:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:48.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:00:48 compute-0 systemd-sysv-generator[225023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:00:48 compute-0 systemd-rc-local-generator[225020]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:00:49 compute-0 sudo[224986]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:49 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:49 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:49.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:49 compute-0 sudo[225178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ountmnoavhnuhtoiiumxeleswulbwtrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917649.3585343-218-279010318476953/AnsiballZ_service_facts.py'
Oct 08 10:00:49 compute-0 sudo[225178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:50 compute-0 python3.9[225180]: ansible-ansible.builtin.service_facts Invoked
Oct 08 10:00:50 compute-0 network[225198]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 10:00:50 compute-0 network[225199]: 'network-scripts' will be removed from distribution in near future.
Oct 08 10:00:50 compute-0 network[225200]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 10:00:50 compute-0 ceph-mon[73572]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 503 B/s wr, 2 op/s
Oct 08 10:00:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:50.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:50 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 409 B/s wr, 1 op/s
Oct 08 10:00:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:00:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:00:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:51.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:00:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:52.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:00:52 compute-0 ceph-mon[73572]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 409 B/s wr, 1 op/s
Oct 08 10:00:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:52 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Oct 08 10:00:52 compute-0 podman[225267]: 2025-10-08 10:00:52.924802066 +0000 UTC m=+0.085461101 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:00:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:53 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:53 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:53.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:54.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:54 compute-0 sudo[225178]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:54 compute-0 ceph-mon[73572]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Oct 08 10:00:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:54 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:00:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:54 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:00:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:00:55 compute-0 sudo[225504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuyxhndeltdstaxazfxxpgnxoppkosco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917654.8284242-242-118820036978120/AnsiballZ_systemd.py'
Oct 08 10:00:55 compute-0 sudo[225504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:55 compute-0 ceph-mon[73572]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:00:55 compute-0 python3.9[225506]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:00:55 compute-0 systemd[1]: Reloading.
Oct 08 10:00:55 compute-0 systemd-sysv-generator[225538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:00:55 compute-0 systemd-rc-local-generator[225535]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:00:55 compute-0 kernel: ganesha.nfsd[217647]: segfault at 50 ip 00007f3cb57d932e sp 00007f3c6e7fb210 error 4 in libntirpc.so.5.8[7f3cb57be000+2c000] likely on CPU 4 (core 0, socket 4)
Oct 08 10:00:55 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 10:00:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:55 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 48 proxy ignored for local
Oct 08 10:00:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:00:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:00:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:55.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:55 compute-0 sudo[225504]: pam_unix(sudo:session): session closed for user root
Oct 08 10:00:55 compute-0 systemd[1]: Started Process Core Dump (PID 225546/UID 0).
Oct 08 10:00:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:00:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:56.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:00:56 compute-0 python3.9[225698]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:00:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:00:56 compute-0 systemd-coredump[225547]: Process 213708 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007f3cb57d932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 10:00:57 compute-0 systemd[1]: systemd-coredump@6-225546-0.service: Deactivated successfully.
Oct 08 10:00:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:57.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:00:57 compute-0 systemd[1]: systemd-coredump@6-225546-0.service: Consumed 1.116s CPU time.
Oct 08 10:00:57 compute-0 podman[225780]: 2025-10-08 10:00:57.096887769 +0000 UTC m=+0.026036216 container died 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 10:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69-merged.mount: Deactivated successfully.
Oct 08 10:00:57 compute-0 podman[225780]: 2025-10-08 10:00:57.137296513 +0000 UTC m=+0.066444950 container remove 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 10:00:57 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 10:00:57 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 10:00:57 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.531s CPU time.
Oct 08 10:00:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:00:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:00:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:00:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:00:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:00:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:00:57 compute-0 sudo[225897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjxskinoqkzuowpxkkaelstemdfiayrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917656.9489942-293-192081564945785/AnsiballZ_podman_container.py'
Oct 08 10:00:57 compute-0 sudo[225897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:00:57 compute-0 python3.9[225899]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 08 10:00:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:57 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 10:00:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:57.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:57 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 10:00:57 compute-0 ceph-mon[73572]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:00:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:00:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:00:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:00:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:00:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:58.916Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:00:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:58.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:00:59 compute-0 podman[225913]: 2025-10-08 10:00:59.073392546 +0000 UTC m=+1.274837840 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 08 10:00:59 compute-0 podman[225973]: 2025-10-08 10:00:59.192167012 +0000 UTC m=+0.037764031 container create 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2182] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 08 10:00:59 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 08 10:00:59 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 08 10:00:59 compute-0 kernel: veth0: entered allmulticast mode
Oct 08 10:00:59 compute-0 kernel: veth0: entered promiscuous mode
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2343] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 08 10:00:59 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 08 10:00:59 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2363] device (veth0): carrier: link connected
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2365] device (podman0): carrier: link connected
Oct 08 10:00:59 compute-0 systemd-udevd[226001]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:00:59 compute-0 systemd-udevd[225998]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2615] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2624] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2631] device (podman0): Activation: starting connection 'podman0' (28122bf2-158a-44ac-8889-0997062b69a1)
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2632] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2635] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2636] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2638] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 08 10:00:59 compute-0 podman[225973]: 2025-10-08 10:00:59.175503058 +0000 UTC m=+0.021100097 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 08 10:00:59 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 08 10:00:59 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2935] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2937] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.2945] device (podman0): Activation: successful, device activated.
Oct 08 10:00:59 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 08 10:00:59 compute-0 systemd[1]: Started libpod-conmon-2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1.scope.
Oct 08 10:00:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:00:59 compute-0 podman[225973]: 2025-10-08 10:00:59.515202936 +0000 UTC m=+0.360799995 container init 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:00:59 compute-0 podman[225973]: 2025-10-08 10:00:59.530172166 +0000 UTC m=+0.375769185 container start 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 10:00:59 compute-0 podman[225973]: 2025-10-08 10:00:59.533939896 +0000 UTC m=+0.379536915 container attach 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:00:59 compute-0 iscsid_config[226130]: iqn.1994-05.com.redhat:6efeb5c8d262
Oct 08 10:00:59 compute-0 systemd[1]: libpod-2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1.scope: Deactivated successfully.
Oct 08 10:00:59 compute-0 podman[225973]: 2025-10-08 10:00:59.536421115 +0000 UTC m=+0.382018134 container died 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 08 10:00:59 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 08 10:00:59 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 08 10:00:59 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 08 10:00:59 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 08 10:00:59 compute-0 NetworkManager[44872]: <info>  [1759917659.6191] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 10:00:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:00:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:00:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:59.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:00:59 compute-0 ceph-mon[73572]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:00:59 compute-0 systemd[1]: run-netns-netns\x2d625650a8\x2da513\x2d3820\x2d47cc\x2dab8617c44ed3.mount: Deactivated successfully.
Oct 08 10:00:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfe4243f301f9575f9bc2270fd17d79ef570034ed85d9e4f1414879539b9ce58-merged.mount: Deactivated successfully.
Oct 08 10:00:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1-userdata-shm.mount: Deactivated successfully.
Oct 08 10:01:00 compute-0 podman[225973]: 2025-10-08 10:01:00.006231074 +0000 UTC m=+0.851828093 container remove 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 08 10:01:00 compute-0 python3.9[225899]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct 08 10:01:00 compute-0 systemd[1]: libpod-conmon-2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1.scope: Deactivated successfully.
Oct 08 10:01:00 compute-0 podman[226199]: 2025-10-08 10:01:00.071295348 +0000 UTC m=+0.073222968 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 08 10:01:00 compute-0 python3.9[225899]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 08 10:01:00 compute-0 sudo[225897]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:00.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:01:00 compute-0 sudo[226394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uslqrwpinffdwradmiztqhaufdmbhbwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917660.6810737-317-192478424364046/AnsiballZ_stat.py'
Oct 08 10:01:00 compute-0 sudo[226394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:01 compute-0 CROND[226399]: (root) CMD (run-parts /etc/cron.hourly)
Oct 08 10:01:01 compute-0 run-parts[226402]: (/etc/cron.hourly) starting 0anacron
Oct 08 10:01:01 compute-0 python3.9[226396]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:01 compute-0 run-parts[226408]: (/etc/cron.hourly) finished 0anacron
Oct 08 10:01:01 compute-0 CROND[226398]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 08 10:01:01 compute-0 sudo[226394]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100101 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:01:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100101 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:01:01 compute-0 sudo[226529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrkwpkuaamctuuvjjfslhoypcfndipmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917660.6810737-317-192478424364046/AnsiballZ_copy.py'
Oct 08 10:01:01 compute-0 sudo[226529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:01.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:01 compute-0 python3.9[226531]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917660.6810737-317-192478424364046/.source.iscsi _original_basename=._l4d_gi_ follow=False checksum=a8411254db0e7ec3d4d3b5a96191404390dc787f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:01 compute-0 sudo[226529]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:01 compute-0 ceph-mon[73572]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:01:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:02.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:02 compute-0 sudo[226682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mknifkzvduhasuezocopdqzucmhixppn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917662.1667545-362-221680557931647/AnsiballZ_file.py'
Oct 08 10:01:02 compute-0 sudo[226682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:02 compute-0 python3.9[226684]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:02 compute-0 sudo[226682]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:01:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:01:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:01:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:01:03 compute-0 python3.9[226835]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:01:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.635158) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663635240, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 575, "num_deletes": 251, "total_data_size": 726235, "memory_usage": 736352, "flush_reason": "Manual Compaction"}
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663639935, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 521773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17397, "largest_seqno": 17971, "table_properties": {"data_size": 518959, "index_size": 786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7425, "raw_average_key_size": 20, "raw_value_size": 513078, "raw_average_value_size": 1382, "num_data_blocks": 34, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917627, "oldest_key_time": 1759917627, "file_creation_time": 1759917663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 4882 microseconds, and 2077 cpu microseconds.
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.640027) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 521773 bytes OK
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.640071) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.641411) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.641426) EVENT_LOG_v1 {"time_micros": 1759917663641421, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.641446) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 723099, prev total WAL file size 723099, number of live WAL files 2.
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.642095) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(509KB)], [35(14MB)]
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663642150, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 16008825, "oldest_snapshot_seqno": -1}
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4929 keys, 12107805 bytes, temperature: kUnknown
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663710403, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12107805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12074279, "index_size": 20104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124241, "raw_average_key_size": 25, "raw_value_size": 11984073, "raw_average_value_size": 2431, "num_data_blocks": 835, "num_entries": 4929, "num_filter_entries": 4929, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.710695) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12107805 bytes
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.712105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.3 rd, 177.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(53.9) write-amplify(23.2) OK, records in: 5432, records dropped: 503 output_compression: NoCompression
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.712136) EVENT_LOG_v1 {"time_micros": 1759917663712122, "job": 16, "event": "compaction_finished", "compaction_time_micros": 68337, "compaction_time_cpu_micros": 25149, "output_level": 6, "num_output_files": 1, "total_output_size": 12107805, "num_input_records": 5432, "num_output_records": 4929, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663712433, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663717005, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.641982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:01:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:01:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:03.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:03 compute-0 sudo[226914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:01:03 compute-0 sudo[226914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:03 compute-0 sudo[226914]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:04 compute-0 ceph-mon[73572]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:01:04 compute-0 sudo[227013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axwmrgnqlejusujikejxaiaoaimbebpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917663.7292259-413-81228054269037/AnsiballZ_lineinfile.py'
Oct 08 10:01:04 compute-0 sudo[227013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:04.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:04 compute-0 python3.9[227015]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:04 compute-0 sudo[227013]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:01:04 compute-0 sudo[227165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rugqpcnlujwtbpttaonjqsonqfpakxnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917664.6906927-440-205671489553164/AnsiballZ_file.py'
Oct 08 10:01:04 compute-0 sudo[227165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:05 compute-0 python3.9[227167]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:01:05 compute-0 sudo[227165]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:05] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:01:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:05] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:01:05 compute-0 sudo[227318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxsvgmywqahforjyvyykzhjydwewzdan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917665.4374514-464-68399194575308/AnsiballZ_stat.py'
Oct 08 10:01:05 compute-0 sudo[227318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:06 compute-0 python3.9[227320]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:06 compute-0 sudo[227318]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:06 compute-0 ceph-mon[73572]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:01:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:06.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:06 compute-0 sudo[227397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgzkenbdoqrlnggcywjcydvzjaawljla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917665.4374514-464-68399194575308/AnsiballZ_file.py'
Oct 08 10:01:06 compute-0 sudo[227397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:06 compute-0 python3.9[227399]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:01:06 compute-0 sudo[227397]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:06 compute-0 sudo[227549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiuykeoysyfrdjehocwqxfbienubtjky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917666.6206331-464-73568762739346/AnsiballZ_stat.py'
Oct 08 10:01:06 compute-0 sudo[227549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:07.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:07.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:07 compute-0 python3.9[227551]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:07 compute-0 sudo[227549]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:07 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 7.
Oct 08 10:01:07 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:01:07 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.531s CPU time.
Oct 08 10:01:07 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 10:01:07 compute-0 sudo[227629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhuslshqldmblcgyldxggltjuhvbhngb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917666.6206331-464-73568762739346/AnsiballZ_file.py'
Oct 08 10:01:07 compute-0 sudo[227629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:07 compute-0 podman[227678]: 2025-10-08 10:01:07.54059113 +0000 UTC m=+0.046049486 container create 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:01:07 compute-0 python3.9[227633]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:01:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:07 compute-0 sudo[227629]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:07 compute-0 podman[227678]: 2025-10-08 10:01:07.516955313 +0000 UTC m=+0.022413679 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:01:07 compute-0 podman[227678]: 2025-10-08 10:01:07.614747597 +0000 UTC m=+0.120205963 container init 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 08 10:01:07 compute-0 podman[227678]: 2025-10-08 10:01:07.61920649 +0000 UTC m=+0.124664836 container start 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:01:07 compute-0 bash[227678]: 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 10:01:07 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 10:01:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:01:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:08 compute-0 sudo[227885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtnlcrkhlbtbdejbesqnzylufdwtjuzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917667.7845504-533-159585612896796/AnsiballZ_file.py'
Oct 08 10:01:08 compute-0 sudo[227885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:08 compute-0 ceph-mon[73572]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:08.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:08 compute-0 python3.9[227887]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:08 compute-0 sudo[227885]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:08 compute-0 sudo[228037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxoakqetvizlxobpgovztswwbkabdrfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917668.5668037-557-66717505744247/AnsiballZ_stat.py'
Oct 08 10:01:08 compute-0 sudo[228037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 08 10:01:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:08.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:01:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:08.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:09 compute-0 python3.9[228039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:09 compute-0 sudo[228037]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:09 compute-0 sudo[228116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmnfaizdddzapwwmdbznvxackvyeazbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917668.5668037-557-66717505744247/AnsiballZ_file.py'
Oct 08 10:01:09 compute-0 sudo[228116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:09 compute-0 python3.9[228118]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:09 compute-0 sudo[228116]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:09 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 08 10:01:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:09.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:10 compute-0 ceph-mon[73572]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 08 10:01:10 compute-0 sudo[228269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fejqneyiqizpcvpghsrejqungtpteffq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917669.902549-593-24815854465712/AnsiballZ_stat.py'
Oct 08 10:01:10 compute-0 sudo[228269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:10.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:10 compute-0 python3.9[228271]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:10 compute-0 sudo[228269]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:10 compute-0 sudo[228347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cguclyvfuqcetnhsckxbakdrtaquhnvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917669.902549-593-24815854465712/AnsiballZ_file.py'
Oct 08 10:01:10 compute-0 sudo[228347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:10 compute-0 python3.9[228349]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:10 compute-0 sudo[228347]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:11 compute-0 sudo[228500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibhagppoaxzzfdwyagvkbctycvafqqqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917671.147555-629-222891049460905/AnsiballZ_systemd.py'
Oct 08 10:01:11 compute-0 sudo[228500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:11 compute-0 python3.9[228502]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:01:11 compute-0 systemd[1]: Reloading.
Oct 08 10:01:11 compute-0 systemd-sysv-generator[228531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:01:11 compute-0 systemd-rc-local-generator[228528]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:01:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:11.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:12 compute-0 sudo[228500]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:12.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:12 compute-0 ceph-mon[73572]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:12 compute-0 sudo[228689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfpwutpamdrecfrnrvxvgnstckyanavn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917672.3697023-653-201888454433140/AnsiballZ_stat.py'
Oct 08 10:01:12 compute-0 sudo[228689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:12 compute-0 python3.9[228691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:12 compute-0 sudo[228689]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:13 compute-0 sudo[228768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkjbymsjbeagfmkztdpyjjllerqsoayr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917672.3697023-653-201888454433140/AnsiballZ_file.py'
Oct 08 10:01:13 compute-0 sudo[228768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:13 compute-0 python3.9[228770]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:13 compute-0 sudo[228768]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:13 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:01:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:13 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:01:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:13 compute-0 sudo[228920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhbirhjrlgzylydjwrbvbghxmxxvuquo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917673.6066105-689-2361382572782/AnsiballZ_stat.py'
Oct 08 10:01:13 compute-0 sudo[228920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:14 compute-0 python3.9[228923]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:14 compute-0 sudo[228920]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:14.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:14 compute-0 ceph-mon[73572]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:14 compute-0 sudo[228999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vazdjwreseewuexnzeyrcgsikkyfvgxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917673.6066105-689-2361382572782/AnsiballZ_file.py'
Oct 08 10:01:14 compute-0 sudo[228999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:14 compute-0 python3.9[229001]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:14 compute-0 sudo[228999]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:15 compute-0 sudo[229152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhhjsnvyvmqklbrmihetlejhryeqqtcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917674.8928792-725-191946212253795/AnsiballZ_systemd.py'
Oct 08 10:01:15 compute-0 sudo[229152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:15 compute-0 ceph-mon[73572]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:15 compute-0 python3.9[229154]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:01:15 compute-0 systemd[1]: Reloading.
Oct 08 10:01:15 compute-0 systemd-rc-local-generator[229182]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:01:15 compute-0 systemd-sysv-generator[229185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:01:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:01:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:01:15 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 10:01:15 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 10:01:15 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 10:01:15 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 10:01:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:15.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:15 compute-0 sudo[229152]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:16.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:16 compute-0 sudo[229346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgqxuobtnbbphgrkvpnynfqgfnfekbvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917676.368966-755-131695200438831/AnsiballZ_file.py'
Oct 08 10:01:16 compute-0 sudo[229346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:16 compute-0 python3.9[229348]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:01:16 compute-0 sudo[229346]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:17.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:17 compute-0 sudo[229499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbefxhecanvacaejsbkuxmuckkqwfzwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917677.0979683-779-256450014500585/AnsiballZ_stat.py'
Oct 08 10:01:17 compute-0 sudo[229499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:17 compute-0 python3.9[229501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:17 compute-0 sudo[229499]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:01:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:01:17 compute-0 sudo[229622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wosecbnkjwbbxhabmqmtlbqriqnhxjnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917677.0979683-779-256450014500585/AnsiballZ_copy.py'
Oct 08 10:01:17 compute-0 sudo[229622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:17.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:01:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:01:17 compute-0 ceph-mon[73572]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:17 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:01:18 compute-0 python3.9[229624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917677.0979683-779-256450014500585/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:01:18 compute-0 sudo[229622]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:01:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:01:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:01:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:01:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:18.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:01:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:18.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:01:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:18.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:01:18 compute-0 sudo[229775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zouayckyjdtbtbikhnrdiuphuoayleaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917678.6959016-830-151200311410549/AnsiballZ_file.py'
Oct 08 10:01:18 compute-0 sudo[229775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:19 compute-0 python3.9[229777]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:01:19 compute-0 sudo[229775]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:19 compute-0 sudo[229928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyjnxkjrjmegzqfothkzgpdcuhsktvue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917679.4397097-854-162111122523198/AnsiballZ_stat.py'
Oct 08 10:01:19 compute-0 sudo[229928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:01:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:19 compute-0 python3.9[229930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:19.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:19 compute-0 sudo[229928]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:20 compute-0 ceph-mon[73572]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:01:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:20.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:20 compute-0 sudo[230067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrczibneeovjmggbbmgnwasaypopqkyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917679.4397097-854-162111122523198/AnsiballZ_copy.py'
Oct 08 10:01:20 compute-0 sudo[230067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:20 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:20 compute-0 python3.9[230069]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917679.4397097-854-162111122523198/.source.json _original_basename=.kx3demn1 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:20 compute-0 sudo[230067]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:20 compute-0 sudo[230219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scivqzcwildvxnykcqureplnzjxsizof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917680.681903-899-172270696408278/AnsiballZ_file.py'
Oct 08 10:01:20 compute-0 sudo[230219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:21 compute-0 python3.9[230221]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:21 compute-0 sudo[230219]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:21 compute-0 sudo[230372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bknynsxfiswrypyhnulkfrftsyzfxdwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917681.399042-923-169896966609483/AnsiballZ_stat.py'
Oct 08 10:01:21 compute-0 sudo[230372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:21 compute-0 sudo[230372]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:22 compute-0 ceph-mon[73572]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:22 compute-0 sudo[230496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdaenzapvvrjetgmzfmouvqtgvthfdib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917681.399042-923-169896966609483/AnsiballZ_copy.py'
Oct 08 10:01:22 compute-0 sudo[230496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:22.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:22 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:22 compute-0 sudo[230496]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:23 compute-0 sudo[230661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpoqhummivpijcfyymiqznbktagkzbqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917682.8210683-974-118142514037725/AnsiballZ_container_config_data.py'
Oct 08 10:01:23 compute-0 sudo[230661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:23 compute-0 podman[230623]: 2025-10-08 10:01:23.29591727 +0000 UTC m=+0.089705046 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 10:01:23 compute-0 python3.9[230669]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 08 10:01:23 compute-0 sudo[230661]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100123 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:01:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:23 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:23 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:23 compute-0 sudo[230754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:01:23 compute-0 sudo[230754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:23 compute-0 sudo[230754]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:24 compute-0 ceph-mon[73572]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:24 compute-0 sudo[230852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjeoiefxzzseabxxjlyjdhxypxhgsbky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917683.7501142-1001-219112970723250/AnsiballZ_container_config_hash.py'
Oct 08 10:01:24 compute-0 sudo[230852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:24.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:24 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:24 compute-0 python3.9[230854]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 10:01:24 compute-0 sudo[230852]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:25 compute-0 sudo[231005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqtvhgvflmfbrbtkqwhkklipptnsvaet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917684.6971848-1028-57265324327022/AnsiballZ_podman_container_info.py'
Oct 08 10:01:25 compute-0 sudo[231005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:25 compute-0 python3.9[231007]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 08 10:01:25 compute-0 sudo[231005]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:25 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:01:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:01:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:25 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:25.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:26 compute-0 ceph-mon[73572]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:01:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:26.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:26 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:27.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:27 compute-0 sudo[231186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlmvntnvvmsidueswlwmxokgnptlaynm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917686.6104658-1067-142733788762589/AnsiballZ_edpm_container_manage.py'
Oct 08 10:01:27 compute-0 sudo[231186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:27 compute-0 python3[231188]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 10:01:27 compute-0 podman[231222]: 2025-10-08 10:01:27.662655464 +0000 UTC m=+0.051688228 container create 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:01:27 compute-0 podman[231222]: 2025-10-08 10:01:27.634374877 +0000 UTC m=+0.023407651 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 08 10:01:27 compute-0 python3[231188]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 08 10:01:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:27 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:27 compute-0 sudo[231186]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:27 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:27.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:28 compute-0 ceph-mon[73572]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:28.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:28 compute-0 sudo[231411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trcgsxqbnibznksqegqpxafjkuxycizo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917688.0418727-1091-116616965628552/AnsiballZ_stat.py'
Oct 08 10:01:28 compute-0 sudo[231411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:28 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:28 compute-0 python3.9[231413]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:01:28 compute-0 sudo[231411]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:28.920Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:01:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:28.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:29 compute-0 sudo[231566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhndlbisgvzzszeajpysbzaxgqyrgznq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917688.929719-1118-125474657535313/AnsiballZ_file.py'
Oct 08 10:01:29 compute-0 sudo[231566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:29 compute-0 python3.9[231568]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:29 compute-0 sudo[231566]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100129 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:01:29 compute-0 sudo[231642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcxxjmjfaqqtkwfqqzkukzgybbpyxvbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917688.929719-1118-125474657535313/AnsiballZ_stat.py'
Oct 08 10:01:29 compute-0 sudo[231642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:29 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:29 compute-0 python3.9[231644]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:01:29 compute-0 sudo[231642]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:29 compute-0 ceph-mon[73572]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:01:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:29 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:29.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:30.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:30 compute-0 sudo[231805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awihimtnvjllaqkyzyhjfqtekaonlzry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917689.8571365-1118-59160507736162/AnsiballZ_copy.py'
Oct 08 10:01:30 compute-0 sudo[231805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:30 compute-0 podman[231768]: 2025-10-08 10:01:30.296460037 +0000 UTC m=+0.057549646 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 08 10:01:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:30 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:30 compute-0 python3.9[231815]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917689.8571365-1118-59160507736162/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:30 compute-0 sudo[231805]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:30 compute-0 sudo[231889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvsvrdujaartyyzkdhziwllrwltcqmol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917689.8571365-1118-59160507736162/AnsiballZ_systemd.py'
Oct 08 10:01:30 compute-0 sudo[231889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:01:31 compute-0 python3.9[231891]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 10:01:31 compute-0 systemd[1]: Reloading.
Oct 08 10:01:31 compute-0 systemd-rc-local-generator[231917]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:01:31 compute-0 systemd-sysv-generator[231922]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:01:31 compute-0 sudo[231889]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:31 compute-0 sudo[232000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmtezbxhtyjkapoppswexwpplhtvzadw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917689.8571365-1118-59160507736162/AnsiballZ_systemd.py'
Oct 08 10:01:31 compute-0 sudo[232000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:31 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:31 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:31.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:31 compute-0 python3.9[232002]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:01:31 compute-0 systemd[1]: Reloading.
Oct 08 10:01:32 compute-0 systemd-sysv-generator[232035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:01:32 compute-0 systemd-rc-local-generator[232032]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:01:32 compute-0 ceph-mon[73572]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:01:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:32.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:32 compute-0 systemd[1]: Starting iscsid container...
Oct 08 10:01:32 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6105164ba49c7ca6d62d445483671ab45469dd2e81be8bb63cfa0d1309aeea3/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6105164ba49c7ca6d62d445483671ab45469dd2e81be8bb63cfa0d1309aeea3/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6105164ba49c7ca6d62d445483671ab45469dd2e81be8bb63cfa0d1309aeea3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:32 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:32 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3.
Oct 08 10:01:32 compute-0 podman[232042]: 2025-10-08 10:01:32.419584704 +0000 UTC m=+0.126289818 container init 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:01:32 compute-0 iscsid[232058]: + sudo -E kolla_set_configs
Oct 08 10:01:32 compute-0 podman[232042]: 2025-10-08 10:01:32.446901969 +0000 UTC m=+0.153607073 container start 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:01:32 compute-0 sudo[232064]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 08 10:01:32 compute-0 podman[232042]: iscsid
Oct 08 10:01:32 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 08 10:01:32 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 08 10:01:32 compute-0 systemd[1]: Started iscsid container.
Oct 08 10:01:32 compute-0 sudo[232000]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:32 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 08 10:01:32 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 08 10:01:32 compute-0 systemd[232080]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 08 10:01:32 compute-0 podman[232065]: 2025-10-08 10:01:32.54270085 +0000 UTC m=+0.076523384 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 08 10:01:32 compute-0 systemd[1]: 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3-5d80cdf7f9c7286c.service: Main process exited, code=exited, status=1/FAILURE
Oct 08 10:01:32 compute-0 systemd[1]: 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3-5d80cdf7f9c7286c.service: Failed with result 'exit-code'.
Oct 08 10:01:32 compute-0 systemd[232080]: Queued start job for default target Main User Target.
Oct 08 10:01:32 compute-0 systemd[232080]: Created slice User Application Slice.
Oct 08 10:01:32 compute-0 systemd[232080]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 08 10:01:32 compute-0 systemd[232080]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 10:01:32 compute-0 systemd[232080]: Reached target Paths.
Oct 08 10:01:32 compute-0 systemd[232080]: Reached target Timers.
Oct 08 10:01:32 compute-0 systemd[232080]: Starting D-Bus User Message Bus Socket...
Oct 08 10:01:32 compute-0 systemd[232080]: Starting Create User's Volatile Files and Directories...
Oct 08 10:01:32 compute-0 systemd[232080]: Listening on D-Bus User Message Bus Socket.
Oct 08 10:01:32 compute-0 systemd[232080]: Reached target Sockets.
Oct 08 10:01:32 compute-0 systemd[232080]: Finished Create User's Volatile Files and Directories.
Oct 08 10:01:32 compute-0 systemd[232080]: Reached target Basic System.
Oct 08 10:01:32 compute-0 systemd[232080]: Reached target Main User Target.
Oct 08 10:01:32 compute-0 systemd[232080]: Startup finished in 165ms.
Oct 08 10:01:32 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 08 10:01:32 compute-0 systemd[1]: Started Session c3 of User root.
Oct 08 10:01:32 compute-0 sudo[232064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 10:01:32 compute-0 iscsid[232058]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 10:01:32 compute-0 iscsid[232058]: INFO:__main__:Validating config file
Oct 08 10:01:32 compute-0 iscsid[232058]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 10:01:32 compute-0 iscsid[232058]: INFO:__main__:Writing out command to execute
Oct 08 10:01:32 compute-0 sudo[232064]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:32 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 08 10:01:32 compute-0 iscsid[232058]: ++ cat /run_command
Oct 08 10:01:32 compute-0 iscsid[232058]: + CMD='/usr/sbin/iscsid -f'
Oct 08 10:01:32 compute-0 iscsid[232058]: + ARGS=
Oct 08 10:01:32 compute-0 iscsid[232058]: + sudo kolla_copy_cacerts
Oct 08 10:01:32 compute-0 sudo[232127]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 08 10:01:32 compute-0 systemd[1]: Started Session c4 of User root.
Oct 08 10:01:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:01:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:01:32 compute-0 sudo[232127]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 10:01:32 compute-0 sudo[232127]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:32 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 08 10:01:32 compute-0 iscsid[232058]: + [[ ! -n '' ]]
Oct 08 10:01:32 compute-0 iscsid[232058]: + . kolla_extend_start
Oct 08 10:01:32 compute-0 iscsid[232058]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 08 10:01:32 compute-0 iscsid[232058]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 08 10:01:32 compute-0 iscsid[232058]: Running command: '/usr/sbin/iscsid -f'
Oct 08 10:01:32 compute-0 iscsid[232058]: + umask 0022
Oct 08 10:01:32 compute-0 iscsid[232058]: + exec /usr/sbin/iscsid -f
Oct 08 10:01:32 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 08 10:01:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:01:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:01:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:33 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:33 compute-0 python3.9[232264]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:01:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:33 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:33.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:34 compute-0 ceph-mon[73572]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:01:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:34.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:34 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:34 compute-0 sudo[232415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnmmmcjkvejwpzhykuaepgdsbrxnocnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917694.0879705-1229-210069463678146/AnsiballZ_file.py'
Oct 08 10:01:34 compute-0 sudo[232415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:34 compute-0 python3.9[232417]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:34 compute-0 sudo[232415]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:01:35 compute-0 sudo[232568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rddufzeysmyxczasaxunuycgjvzavhab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917695.1292164-1262-108757080956380/AnsiballZ_service_facts.py'
Oct 08 10:01:35 compute-0 sudo[232568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:35 compute-0 python3.9[232570]: ansible-ansible.builtin.service_facts Invoked
Oct 08 10:01:35 compute-0 network[232587]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 10:01:35 compute-0 network[232588]: 'network-scripts' will be removed from distribution in near future.
Oct 08 10:01:35 compute-0 network[232589]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 10:01:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:35 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:01:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:01:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:35 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:35.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:36.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:36 compute-0 ceph-mon[73572]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:01:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:36 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:01:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:37.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:37 compute-0 ceph-mon[73572]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:01:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:37 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:37 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:37.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:38.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:38 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:38 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:01:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 08 10:01:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:38.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:39 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:39 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:39.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:39 compute-0 ceph-mon[73572]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 08 10:01:40 compute-0 sudo[232568]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:40 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 08 10:01:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:01:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:01:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:41 compute-0 sudo[232868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpfohafgkkletvmbzjebrejdecqnkykf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917701.4962702-1292-244731748066167/AnsiballZ_file.py'
Oct 08 10:01:41 compute-0 sudo[232868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:41.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:42 compute-0 python3.9[232870]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 08 10:01:42 compute-0 sudo[232868]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:42 compute-0 ceph-mon[73572]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 08 10:01:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:42.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:42 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:42 compute-0 sudo[233021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bihgdwyjezyiqhsirdxviwszodopktdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917702.2612858-1316-40994222198666/AnsiballZ_modprobe.py'
Oct 08 10:01:42 compute-0 sudo[233021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 08 10:01:42 compute-0 python3.9[233023]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 08 10:01:43 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 08 10:01:43 compute-0 systemd[232080]: Activating special unit Exit the Session...
Oct 08 10:01:43 compute-0 systemd[232080]: Stopped target Main User Target.
Oct 08 10:01:43 compute-0 systemd[232080]: Stopped target Basic System.
Oct 08 10:01:43 compute-0 systemd[232080]: Stopped target Paths.
Oct 08 10:01:43 compute-0 systemd[232080]: Stopped target Sockets.
Oct 08 10:01:43 compute-0 systemd[232080]: Stopped target Timers.
Oct 08 10:01:43 compute-0 systemd[232080]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 08 10:01:43 compute-0 systemd[232080]: Closed D-Bus User Message Bus Socket.
Oct 08 10:01:43 compute-0 systemd[232080]: Stopped Create User's Volatile Files and Directories.
Oct 08 10:01:43 compute-0 systemd[232080]: Removed slice User Application Slice.
Oct 08 10:01:43 compute-0 systemd[232080]: Reached target Shutdown.
Oct 08 10:01:43 compute-0 systemd[232080]: Finished Exit the Session.
Oct 08 10:01:43 compute-0 systemd[232080]: Reached target Exit the Session.
Oct 08 10:01:43 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 08 10:01:43 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 08 10:01:43 compute-0 sudo[233021]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:43 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 08 10:01:43 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 08 10:01:43 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 08 10:01:43 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 08 10:01:43 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 08 10:01:43 compute-0 sudo[233180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cplnlabqyikecutogqsnkwguaopdbbdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917703.2489626-1340-80584199922098/AnsiballZ_stat.py'
Oct 08 10:01:43 compute-0 sudo[233180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:43 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:43 compute-0 python3.9[233182]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:43 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:43 compute-0 sudo[233180]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:43.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:44 compute-0 sudo[233211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:01:44 compute-0 sudo[233211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:44 compute-0 sudo[233211]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:44 compute-0 ceph-mon[73572]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 08 10:01:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:44.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:44 compute-0 sudo[233329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoijmsaymtveocnmfgubaifdqoaylidf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917703.2489626-1340-80584199922098/AnsiballZ_copy.py'
Oct 08 10:01:44 compute-0 sudo[233329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:44 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:44 compute-0 python3.9[233331]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917703.2489626-1340-80584199922098/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:44 compute-0 sudo[233329]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:44 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:01:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:01:45 compute-0 sudo[233482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djygleuifjrgydhciivsdccptxisyrqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917704.824562-1388-279022366371988/AnsiballZ_lineinfile.py'
Oct 08 10:01:45 compute-0 sudo[233482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:45 compute-0 python3.9[233484]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:45 compute-0 sudo[233482]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:01:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:01:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:45 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:45 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:45.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:45 compute-0 sudo[233608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:01:45 compute-0 sudo[233608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:45 compute-0 sudo[233608]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:45 compute-0 sudo[233658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkpxihnfgfxfjxgyibblhhuonrgicudn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917705.593881-1412-78240313020557/AnsiballZ_systemd.py'
Oct 08 10:01:45 compute-0 sudo[233658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:45 compute-0 sudo[233663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 10:01:45 compute-0 sudo[233663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:46 compute-0 ceph-mon[73572]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:01:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:46.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:46 compute-0 python3.9[233662]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 10:01:46 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 08 10:01:46 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 08 10:01:46 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 08 10:01:46 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 08 10:01:46 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 08 10:01:46 compute-0 sudo[233658]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:46 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:46 compute-0 podman[233787]: 2025-10-08 10:01:46.602302542 +0000 UTC m=+0.073512887 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:01:46 compute-0 podman[233787]: 2025-10-08 10:01:46.694058793 +0000 UTC m=+0.165269148 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:01:46 compute-0 sudo[233979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apfzvlschoekhsvyyxfrloivhyxpuqcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917706.5985768-1436-145008846095191/AnsiballZ_file.py'
Oct 08 10:01:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:01:46 compute-0 sudo[233979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:47.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:47 compute-0 python3.9[233983]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:01:47 compute-0 sudo[233979]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:47 compute-0 podman[234077]: 2025-10-08 10:01:47.282188773 +0000 UTC m=+0.058980872 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:01:47 compute-0 podman[234077]: 2025-10-08 10:01:47.317567126 +0000 UTC m=+0.094359235 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:01:47 compute-0 podman[234220]: 2025-10-08 10:01:47.60720869 +0000 UTC m=+0.070727098 container exec 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 10:01:47 compute-0 podman[234220]: 2025-10-08 10:01:47.64280356 +0000 UTC m=+0.106322038 container exec_died 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:01:47
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.log', '.nfs', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:01:47 compute-0 sudo[234310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywsdkuaoykuryxsowqhhhtktpubndqzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917707.4063294-1463-32253723114537/AnsiballZ_stat.py'
Oct 08 10:01:47 compute-0 sudo[234310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:47 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:01:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:01:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:47 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:47 compute-0 podman[234341]: 2025-10-08 10:01:47.888074761 +0000 UTC m=+0.072373821 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:01:47 compute-0 podman[234341]: 2025-10-08 10:01:47.900434027 +0000 UTC m=+0.084733067 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 10:01:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:47.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:47 compute-0 python3.9[234319]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:01:47 compute-0 sudo[234310]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:01:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:01:48 compute-0 podman[234432]: 2025-10-08 10:01:48.09425556 +0000 UTC m=+0.046098659 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-type=git, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 08 10:01:48 compute-0 ceph-mon[73572]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:01:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:01:48 compute-0 podman[234432]: 2025-10-08 10:01:48.172671813 +0000 UTC m=+0.124514882 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, release=1793, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.expose-services=, name=keepalived, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:01:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:48.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:48 compute-0 podman[234560]: 2025-10-08 10:01:48.380254965 +0000 UTC m=+0.062493393 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:01:48 compute-0 podman[234560]: 2025-10-08 10:01:48.410503965 +0000 UTC m=+0.092742383 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:01:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:48 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:48 compute-0 sudo[234668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsddfbgfgvcuhlwgtcigbniwibfobbnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917708.1997862-1490-119648092438292/AnsiballZ_stat.py'
Oct 08 10:01:48 compute-0 sudo[234668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:48 compute-0 podman[234698]: 2025-10-08 10:01:48.623418089 +0000 UTC m=+0.051012136 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 10:01:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:48 compute-0 python3.9[234677]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:01:48 compute-0 sudo[234668]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:48 compute-0 podman[234698]: 2025-10-08 10:01:48.781151054 +0000 UTC m=+0.208745111 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 10:01:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:01:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:48.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:49 compute-0 podman[234912]: 2025-10-08 10:01:49.172043943 +0000 UTC m=+0.056975598 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:01:49 compute-0 podman[234912]: 2025-10-08 10:01:49.203722857 +0000 UTC m=+0.088654502 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:01:49 compute-0 sudo[235003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqwqdvxnhcgogpzatvbqjqdkwrpmzmrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917708.963324-1514-262657878425471/AnsiballZ_stat.py'
Oct 08 10:01:49 compute-0 sudo[235003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:49 compute-0 sudo[233663]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:49 compute-0 sudo[235007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:01:49 compute-0 sudo[235007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:49 compute-0 sudo[235007]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:49 compute-0 sudo[235032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:01:49 compute-0 sudo[235032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:49 compute-0 python3.9[235006]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:49 compute-0 sudo[235003]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:49 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:49 compute-0 sudo[235194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwuvhrlcgdzbmbmnmsnplkhmppekdyld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917708.963324-1514-262657878425471/AnsiballZ_copy.py'
Oct 08 10:01:49 compute-0 sudo[235194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:49 compute-0 sudo[235032]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:49 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:01:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:01:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 465 B/s wr, 2 op/s
Oct 08 10:01:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:49.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:01:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:01:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:01:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:01:49 compute-0 python3.9[235198]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917708.963324-1514-262657878425471/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:49 compute-0 sudo[235212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:01:49 compute-0 sudo[235212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:49 compute-0 sudo[235212]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:49 compute-0 sudo[235194]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:50 compute-0 sudo[235237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:01:50 compute-0 sudo[235237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:50.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:50 compute-0 ceph-mon[73572]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:01:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:01:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:01:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:01:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:01:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:01:50 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Oct 08 10:01:50 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 08 10:01:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:50 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:50 compute-0 podman[235329]: 2025-10-08 10:01:50.518564669 +0000 UTC m=+0.062477564 container create f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:01:50 compute-0 systemd[1]: Started libpod-conmon-f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7.scope.
Oct 08 10:01:50 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:01:50 compute-0 podman[235329]: 2025-10-08 10:01:50.495098566 +0000 UTC m=+0.039011551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:01:50 compute-0 podman[235329]: 2025-10-08 10:01:50.596229078 +0000 UTC m=+0.140141993 container init f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:01:50 compute-0 podman[235329]: 2025-10-08 10:01:50.603251902 +0000 UTC m=+0.147164797 container start f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 08 10:01:50 compute-0 adoring_galois[235368]: 167 167
Oct 08 10:01:50 compute-0 systemd[1]: libpod-f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7.scope: Deactivated successfully.
Oct 08 10:01:50 compute-0 podman[235329]: 2025-10-08 10:01:50.609907606 +0000 UTC m=+0.153820581 container attach f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:01:50 compute-0 podman[235329]: 2025-10-08 10:01:50.610412922 +0000 UTC m=+0.154325817 container died f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a871b92c64683626e6ecc0ab96aff298af7fd416fbe9f540d6df6c652922bcd3-merged.mount: Deactivated successfully.
Oct 08 10:01:50 compute-0 podman[235329]: 2025-10-08 10:01:50.656656144 +0000 UTC m=+0.200569039 container remove f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:01:50 compute-0 systemd[1]: libpod-conmon-f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7.scope: Deactivated successfully.
Oct 08 10:01:50 compute-0 podman[235423]: 2025-10-08 10:01:50.814747181 +0000 UTC m=+0.051094829 container create 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:01:50 compute-0 systemd[1]: Started libpod-conmon-369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae.scope.
Oct 08 10:01:50 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:01:50 compute-0 podman[235423]: 2025-10-08 10:01:50.791447234 +0000 UTC m=+0.027794902 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:50 compute-0 podman[235423]: 2025-10-08 10:01:50.915017775 +0000 UTC m=+0.151365433 container init 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Oct 08 10:01:50 compute-0 podman[235423]: 2025-10-08 10:01:50.922493844 +0000 UTC m=+0.158841482 container start 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 10:01:50 compute-0 podman[235423]: 2025-10-08 10:01:50.925678176 +0000 UTC m=+0.162025844 container attach 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:01:50 compute-0 sudo[235517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbnkjjgtesfyglagszodkuyaylijrhdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917710.5619335-1559-38066005180262/AnsiballZ_command.py'
Oct 08 10:01:50 compute-0 sudo[235517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:51 compute-0 python3.9[235519]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:01:51 compute-0 sudo[235517]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:51 compute-0 gifted_montalcini[235467]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:01:51 compute-0 gifted_montalcini[235467]: --> All data devices are unavailable
Oct 08 10:01:51 compute-0 systemd[1]: libpod-369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae.scope: Deactivated successfully.
Oct 08 10:01:51 compute-0 podman[235423]: 2025-10-08 10:01:51.272846943 +0000 UTC m=+0.509194591 container died 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13-merged.mount: Deactivated successfully.
Oct 08 10:01:51 compute-0 ceph-mon[73572]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 465 B/s wr, 2 op/s
Oct 08 10:01:51 compute-0 ceph-mon[73572]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Oct 08 10:01:51 compute-0 ceph-mon[73572]: Cluster is now healthy
Oct 08 10:01:51 compute-0 podman[235423]: 2025-10-08 10:01:51.321227204 +0000 UTC m=+0.557574842 container remove 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Oct 08 10:01:51 compute-0 systemd[1]: libpod-conmon-369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae.scope: Deactivated successfully.
Oct 08 10:01:51 compute-0 sudo[235237]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:51 compute-0 sudo[235590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:01:51 compute-0 sudo[235590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:51 compute-0 sudo[235590]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:51 compute-0 sudo[235644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:01:51 compute-0 sudo[235644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100151 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:01:51 compute-0 sudo[235742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjdhobdsqmenwzpngmhuleqlgnastlqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917711.3756692-1583-79262507207526/AnsiballZ_lineinfile.py'
Oct 08 10:01:51 compute-0 sudo[235742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:51 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:51 compute-0 python3.9[235744]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:51 compute-0 sudo[235742]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:51 compute-0 podman[235789]: 2025-10-08 10:01:51.83653036 +0000 UTC m=+0.038208456 container create f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 08 10:01:51 compute-0 systemd[1]: Started libpod-conmon-f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65.scope.
Oct 08 10:01:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:51 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 465 B/s wr, 2 op/s
Oct 08 10:01:51 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:01:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:51.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:51 compute-0 podman[235789]: 2025-10-08 10:01:51.820723083 +0000 UTC m=+0.022401189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:01:51 compute-0 podman[235789]: 2025-10-08 10:01:51.916789512 +0000 UTC m=+0.118467628 container init f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 10:01:51 compute-0 podman[235789]: 2025-10-08 10:01:51.924053045 +0000 UTC m=+0.125731151 container start f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 10:01:51 compute-0 relaxed_ride[235818]: 167 167
Oct 08 10:01:51 compute-0 podman[235789]: 2025-10-08 10:01:51.928968512 +0000 UTC m=+0.130646628 container attach f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:01:51 compute-0 systemd[1]: libpod-f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65.scope: Deactivated successfully.
Oct 08 10:01:51 compute-0 podman[235789]: 2025-10-08 10:01:51.930715578 +0000 UTC m=+0.132393674 container died f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b52cac6e23e54a15ef15e27f8bdfcd08abb90a89af9cf4868ab4c25f7d03e306-merged.mount: Deactivated successfully.
Oct 08 10:01:51 compute-0 podman[235789]: 2025-10-08 10:01:51.973132847 +0000 UTC m=+0.174810933 container remove f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:01:51 compute-0 systemd[1]: libpod-conmon-f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65.scope: Deactivated successfully.
Oct 08 10:01:52 compute-0 podman[235876]: 2025-10-08 10:01:52.119967714 +0000 UTC m=+0.039408124 container create b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 10:01:52 compute-0 systemd[1]: Started libpod-conmon-b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c.scope.
Oct 08 10:01:52 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:52 compute-0 podman[235876]: 2025-10-08 10:01:52.195763423 +0000 UTC m=+0.115203853 container init b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 10:01:52 compute-0 podman[235876]: 2025-10-08 10:01:52.104010402 +0000 UTC m=+0.023450832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:01:52 compute-0 podman[235876]: 2025-10-08 10:01:52.202160368 +0000 UTC m=+0.121600788 container start b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 10:01:52 compute-0 podman[235876]: 2025-10-08 10:01:52.20598567 +0000 UTC m=+0.125426100 container attach b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 08 10:01:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:52.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:52 compute-0 ceph-mon[73572]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 465 B/s wr, 2 op/s
Oct 08 10:01:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:52 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:52 compute-0 determined_villani[235922]: {
Oct 08 10:01:52 compute-0 determined_villani[235922]:     "1": [
Oct 08 10:01:52 compute-0 determined_villani[235922]:         {
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "devices": [
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "/dev/loop3"
Oct 08 10:01:52 compute-0 determined_villani[235922]:             ],
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "lv_name": "ceph_lv0",
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "lv_size": "21470642176",
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "name": "ceph_lv0",
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "tags": {
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.cluster_name": "ceph",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.crush_device_class": "",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.encrypted": "0",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.osd_id": "1",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.type": "block",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.vdo": "0",
Oct 08 10:01:52 compute-0 determined_villani[235922]:                 "ceph.with_tpm": "0"
Oct 08 10:01:52 compute-0 determined_villani[235922]:             },
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "type": "block",
Oct 08 10:01:52 compute-0 determined_villani[235922]:             "vg_name": "ceph_vg0"
Oct 08 10:01:52 compute-0 determined_villani[235922]:         }
Oct 08 10:01:52 compute-0 determined_villani[235922]:     ]
Oct 08 10:01:52 compute-0 determined_villani[235922]: }
Oct 08 10:01:52 compute-0 systemd[1]: libpod-b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c.scope: Deactivated successfully.
Oct 08 10:01:52 compute-0 conmon[235922]: conmon b988ed17cacd25a74b3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c.scope/container/memory.events
Oct 08 10:01:52 compute-0 sudo[236004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpttppspuarusmdqpwcwhfxukqodiblh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917712.0741003-1607-228388517424971/AnsiballZ_replace.py'
Oct 08 10:01:52 compute-0 sudo[236004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:52 compute-0 podman[236005]: 2025-10-08 10:01:52.55665972 +0000 UTC m=+0.024330852 container died b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 10:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4-merged.mount: Deactivated successfully.
Oct 08 10:01:52 compute-0 podman[236005]: 2025-10-08 10:01:52.600161334 +0000 UTC m=+0.067832456 container remove b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:01:52 compute-0 systemd[1]: libpod-conmon-b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c.scope: Deactivated successfully.
Oct 08 10:01:52 compute-0 sudo[235644]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:52 compute-0 sudo[236022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:01:52 compute-0 sudo[236022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:52 compute-0 sudo[236022]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:52 compute-0 python3.9[236012]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:52 compute-0 sudo[236047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:01:52 compute-0 sudo[236047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:52 compute-0 sudo[236004]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:53 compute-0 podman[236232]: 2025-10-08 10:01:53.184679727 +0000 UTC m=+0.038930598 container create b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 08 10:01:53 compute-0 systemd[1]: Started libpod-conmon-b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0.scope.
Oct 08 10:01:53 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:01:53 compute-0 sudo[236278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxvfcrmnnusgtkplghwshmxfuxpyfgis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917712.9256096-1631-104413559049548/AnsiballZ_replace.py'
Oct 08 10:01:53 compute-0 sudo[236278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:53 compute-0 podman[236232]: 2025-10-08 10:01:53.253980209 +0000 UTC m=+0.108231120 container init b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:01:53 compute-0 podman[236232]: 2025-10-08 10:01:53.262081499 +0000 UTC m=+0.116332360 container start b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:01:53 compute-0 podman[236232]: 2025-10-08 10:01:53.166784954 +0000 UTC m=+0.021035825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:01:53 compute-0 podman[236232]: 2025-10-08 10:01:53.266019964 +0000 UTC m=+0.120270885 container attach b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:01:53 compute-0 infallible_hermann[236279]: 167 167
Oct 08 10:01:53 compute-0 systemd[1]: libpod-b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0.scope: Deactivated successfully.
Oct 08 10:01:53 compute-0 podman[236232]: 2025-10-08 10:01:53.268778703 +0000 UTC m=+0.123029564 container died b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:01:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba729483357026e0165e847409d59cd109b38652e72144f1a7989a3a779f1e72-merged.mount: Deactivated successfully.
Oct 08 10:01:53 compute-0 podman[236232]: 2025-10-08 10:01:53.304527309 +0000 UTC m=+0.158778190 container remove b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:01:53 compute-0 systemd[1]: libpod-conmon-b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0.scope: Deactivated successfully.
Oct 08 10:01:53 compute-0 podman[236299]: 2025-10-08 10:01:53.44249521 +0000 UTC m=+0.089385745 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 10:01:53 compute-0 python3.9[236283]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:53 compute-0 sudo[236278]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:53 compute-0 podman[236325]: 2025-10-08 10:01:53.473119412 +0000 UTC m=+0.050276222 container create 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 10:01:53 compute-0 systemd[1]: Started libpod-conmon-506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f.scope.
Oct 08 10:01:53 compute-0 podman[236325]: 2025-10-08 10:01:53.453174352 +0000 UTC m=+0.030331172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:01:53 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:01:53 compute-0 podman[236325]: 2025-10-08 10:01:53.56166241 +0000 UTC m=+0.138819200 container init 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 10:01:53 compute-0 podman[236325]: 2025-10-08 10:01:53.569477361 +0000 UTC m=+0.146634131 container start 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:01:53 compute-0 podman[236325]: 2025-10-08 10:01:53.574268064 +0000 UTC m=+0.151424834 container attach 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:01:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:53 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:53 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 465 B/s wr, 2 op/s
Oct 08 10:01:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:53.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:54 compute-0 sudo[236570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wraidkezlouftfsxssjaaepxdsmhlxgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917713.9460695-1658-58066513643047/AnsiballZ_lineinfile.py'
Oct 08 10:01:54 compute-0 sudo[236570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:54 compute-0 lvm[236573]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:01:54 compute-0 lvm[236573]: VG ceph_vg0 finished
Oct 08 10:01:54 compute-0 recursing_booth[236370]: {}
Oct 08 10:01:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:54.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:54 compute-0 systemd[1]: libpod-506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f.scope: Deactivated successfully.
Oct 08 10:01:54 compute-0 podman[236325]: 2025-10-08 10:01:54.298629149 +0000 UTC m=+0.875785929 container died 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:01:54 compute-0 systemd[1]: libpod-506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f.scope: Consumed 1.146s CPU time.
Oct 08 10:01:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c-merged.mount: Deactivated successfully.
Oct 08 10:01:54 compute-0 podman[236325]: 2025-10-08 10:01:54.339751098 +0000 UTC m=+0.916907868 container remove 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:01:54 compute-0 systemd[1]: libpod-conmon-506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f.scope: Deactivated successfully.
Oct 08 10:01:54 compute-0 sudo[236047]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:01:54 compute-0 python3.9[236574]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:01:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:54 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:54 compute-0 sudo[236570]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:54 compute-0 sudo[236590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:01:54 compute-0 sudo[236590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:01:54 compute-0 sudo[236590]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:54 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 08 10:01:54 compute-0 sudo[236765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcozxectyeioqmkuqgunfrsynbtfflru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917714.5977144-1658-223872272654158/AnsiballZ_lineinfile.py'
Oct 08 10:01:54 compute-0 sudo[236765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:54 compute-0 ceph-mon[73572]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 465 B/s wr, 2 op/s
Oct 08 10:01:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:01:55 compute-0 python3.9[236767]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:55 compute-0 sudo[236765]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:55 compute-0 sudo[236918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqbnojbfoukseteiejebotmehavfatsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917715.1688411-1658-145134303787423/AnsiballZ_lineinfile.py'
Oct 08 10:01:55 compute-0 sudo[236918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:55 compute-0 python3.9[236920]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:55 compute-0 sudo[236918]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:55] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 10:01:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:55] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 10:01:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:55 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:55 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 93 B/s wr, 0 op/s
Oct 08 10:01:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:01:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:55.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:01:55 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 08 10:01:56 compute-0 sudo[237072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccibkzsafrxbgzrkrylmofpgzyxopsob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917715.8149047-1658-194576866819827/AnsiballZ_lineinfile.py'
Oct 08 10:01:56 compute-0 sudo[237072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:56.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:56 compute-0 python3.9[237074]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:56 compute-0 sudo[237072]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:56 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:56 compute-0 ceph-mon[73572]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 93 B/s wr, 0 op/s
Oct 08 10:01:57 compute-0 sudo[237224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbfjaskvmfwwdrknnxvseggsmoibpsra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917716.7729151-1745-229000129459901/AnsiballZ_stat.py'
Oct 08 10:01:57 compute-0 sudo[237224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:57.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:57 compute-0 python3.9[237226]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:01:57 compute-0 sudo[237224]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:01:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:01:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:01:57.400 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:01:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:01:57.400 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:01:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:57 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:57 compute-0 sudo[237379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdkzypxpqkvxvuqweaubgxrnwvpiwzmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917717.5391326-1769-87029423833587/AnsiballZ_file.py'
Oct 08 10:01:57 compute-0 sudo[237379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:57 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 93 B/s wr, 0 op/s
Oct 08 10:01:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:57.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:58 compute-0 python3.9[237381]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:01:58 compute-0 sudo[237379]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:01:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:58.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:01:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:58 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:01:58 compute-0 sudo[237532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inynnpyupnbhxfnrvxfjmftocqfxglos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917718.424973-1796-112092570131734/AnsiballZ_file.py'
Oct 08 10:01:58 compute-0 sudo[237532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:58.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:01:58 compute-0 python3.9[237534]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:01:58 compute-0 sudo[237532]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:59 compute-0 ceph-mon[73572]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 93 B/s wr, 0 op/s
Oct 08 10:01:59 compute-0 sudo[237685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peblzhkpsixkrwfkltjjsinwmzgzrnrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917719.1659-1820-111128557024069/AnsiballZ_stat.py'
Oct 08 10:01:59 compute-0 sudo[237685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:01:59 compute-0 python3.9[237687]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:01:59 compute-0 sudo[237685]: pam_unix(sudo:session): session closed for user root
Oct 08 10:01:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:59 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Oct 08 10:01:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:59 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:01:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:01:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:01:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:59.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:01:59 compute-0 sudo[237764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awfjcrfdtrlebcbwzuewwjgvmlnrvted ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917719.1659-1820-111128557024069/AnsiballZ_file.py'
Oct 08 10:01:59 compute-0 sudo[237764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:00 compute-0 python3.9[237766]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:02:00 compute-0 sudo[237764]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:00.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:00 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:00 compute-0 sudo[237933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rodpsuwjlduhxxudduorxzwxlpjoclhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917720.3369393-1820-67954969167575/AnsiballZ_stat.py'
Oct 08 10:02:00 compute-0 sudo[237933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:00 compute-0 podman[237890]: 2025-10-08 10:02:00.815871208 +0000 UTC m=+0.063014591 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 08 10:02:01 compute-0 python3.9[237937]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:02:01 compute-0 ceph-mon[73572]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Oct 08 10:02:01 compute-0 sudo[237933]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:01 compute-0 sudo[238014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltewbaamaqrbjgpyrnuzzatuweybrrlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917720.3369393-1820-67954969167575/AnsiballZ_file.py'
Oct 08 10:02:01 compute-0 sudo[238014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:01 compute-0 python3.9[238016]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:02:01 compute-0 sudo[238014]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:01.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:02.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:02 compute-0 sudo[238167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkoceumljhksfyjpzacgavxqaxzecnct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917722.0081763-1889-239249453996702/AnsiballZ_file.py'
Oct 08 10:02:02 compute-0 sudo[238167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:02 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:02 compute-0 python3.9[238169]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:02 compute-0 sudo[238167]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct 08 10:02:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:02:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:02:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:02:02 compute-0 podman[238237]: 2025-10-08 10:02:02.936665549 +0000 UTC m=+0.080722878 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 08 10:02:03 compute-0 ceph-mon[73572]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:02:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:02:03 compute-0 sudo[238337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qusjiilcrqtoqbeusydoknlilnrjkyuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917722.7709997-1913-271958240419979/AnsiballZ_stat.py'
Oct 08 10:02:03 compute-0 sudo[238337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:03 compute-0 python3.9[238339]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:02:03 compute-0 sudo[238337]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:03 compute-0 sudo[238415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvubkhwswzrhhcrzufdhtslbpehjuswv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917722.7709997-1913-271958240419979/AnsiballZ_file.py'
Oct 08 10:02:03 compute-0 sudo[238415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:03 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:03 compute-0 python3.9[238417]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:03 compute-0 sudo[238415]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:03 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:03.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:04 compute-0 sudo[238443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:02:04 compute-0 sudo[238443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:04 compute-0 sudo[238443]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:04.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:04 compute-0 sudo[238593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxqagqdcggdxloibkonxtfboxyfyoamo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917724.172943-1949-234839566812288/AnsiballZ_stat.py'
Oct 08 10:02:04 compute-0 sudo[238593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:04 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:04 compute-0 python3.9[238595]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:02:04 compute-0 sudo[238593]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:04 compute-0 sudo[238671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juvniyiecujyogunezcckucrmnyfogrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917724.172943-1949-234839566812288/AnsiballZ_file.py'
Oct 08 10:02:04 compute-0 sudo[238671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:05 compute-0 python3.9[238673]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:05 compute-0 sudo[238671]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:05 compute-0 ceph-mon[73572]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:05 compute-0 sudo[238824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmyciejihclmmuqdbgbcwoskvnnejyua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917725.4637027-1985-34415154434101/AnsiballZ_systemd.py'
Oct 08 10:02:05 compute-0 sudo[238824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:02:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:02:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:05 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:05 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:05.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:06 compute-0 python3.9[238826]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:02:06 compute-0 systemd[1]: Reloading.
Oct 08 10:02:06 compute-0 systemd-rc-local-generator[238853]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:02:06 compute-0 systemd-sysv-generator[238856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:02:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:06.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:06 compute-0 sudo[238824]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:06 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:06 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 08 10:02:06 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 08 10:02:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:07.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:02:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:07.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:02:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:07.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:02:07 compute-0 ceph-mon[73572]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:07 compute-0 sudo[239016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buldydzvdrmranuoluxhrmsrvkieiphb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917726.7811894-2009-61392068744399/AnsiballZ_stat.py'
Oct 08 10:02:07 compute-0 sudo[239016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:07 compute-0 python3.9[239018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:02:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100207 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:02:07 compute-0 sudo[239016]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:07 compute-0 sudo[239094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhvhklitrturdpwbgaemcfpnfdcivnea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917726.7811894-2009-61392068744399/AnsiballZ_file.py'
Oct 08 10:02:07 compute-0 sudo[239094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:02:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:07.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:02:07 compute-0 python3.9[239096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:07 compute-0 sudo[239094]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:08.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:08 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:08 compute-0 sudo[239247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yopzhumonsipskpuuwhrcugzpvxseihn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917728.280818-2045-165668374351024/AnsiballZ_stat.py'
Oct 08 10:02:08 compute-0 sudo[239247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:08 compute-0 python3.9[239249]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:02:08 compute-0 sudo[239247]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:08.925Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:02:08 compute-0 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:55410] [POST] [200] [0.002s] [4.0B] [d3cbdf7b-5643-40ed-970d-10daa4db13bd] /api/prometheus_receiver
Oct 08 10:02:08 compute-0 sudo[239325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucjgzstifldorrcukpizqfkrspubsltn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917728.280818-2045-165668374351024/AnsiballZ_file.py'
Oct 08 10:02:08 compute-0 sudo[239325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:09 compute-0 python3.9[239327]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:09 compute-0 sudo[239325]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:09 compute-0 ceph-mon[73572]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:09 compute-0 sudo[239478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjsjoqqrlkncmfbrkoboaobynjkjmsnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917729.4718742-2081-114755876292061/AnsiballZ_systemd.py'
Oct 08 10:02:09 compute-0 sudo[239478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:09 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:02:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:09 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:09.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:10 compute-0 python3.9[239480]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:02:10 compute-0 systemd[1]: Reloading.
Oct 08 10:02:10 compute-0 systemd-rc-local-generator[239508]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:02:10 compute-0 systemd-sysv-generator[239511]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:02:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:10.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:10 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:10 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 10:02:10 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 10:02:10 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 10:02:10 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 10:02:10 compute-0 sudo[239478]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:11 compute-0 ceph-mon[73572]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:02:11 compute-0 sudo[239674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhznfcycgmwawjrdxzqahylyksjhqcdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917731.1066365-2111-228320651156222/AnsiballZ_file.py'
Oct 08 10:02:11 compute-0 sudo[239674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:11 compute-0 python3.9[239676]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:02:11 compute-0 sudo[239674]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:11 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:02:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:11 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:11.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:12 compute-0 sudo[239827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljaabamsulhesegdkkhczmxvhzwtjgxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917731.8395934-2135-228740585586635/AnsiballZ_stat.py'
Oct 08 10:02:12 compute-0 sudo[239827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:12.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:12 compute-0 python3.9[239829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:02:12 compute-0 sudo[239827]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:12 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:12 compute-0 sudo[239950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdnwshfkxejdqvyutkrdpcygsrjntkle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917731.8395934-2135-228740585586635/AnsiballZ_copy.py'
Oct 08 10:02:12 compute-0 sudo[239950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:12 compute-0 python3.9[239952]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917731.8395934-2135-228740585586635/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:02:12 compute-0 sudo[239950]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:13 compute-0 ceph-mon[73572]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:02:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:13 compute-0 sudo[240103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvihshigsbbjwlpokmhlzhzfaexcegmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917733.4887252-2186-147087175983726/AnsiballZ_file.py'
Oct 08 10:02:13 compute-0 sudo[240103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:13 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:02:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:13 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:13.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:13 compute-0 python3.9[240105]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:02:13 compute-0 sudo[240103]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:14.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:14 compute-0 ceph-mon[73572]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:02:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:14 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:14 compute-0 sudo[240256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdzbqiqnzvszwtkipoyxpswwnpgrkfsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917734.2465703-2210-172891251407674/AnsiballZ_stat.py'
Oct 08 10:02:14 compute-0 sudo[240256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:14 compute-0 python3.9[240258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:02:14 compute-0 sudo[240256]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:15 compute-0 sudo[240380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvutddlrpquigvrjiwyaxehzdipiazqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917734.2465703-2210-172891251407674/AnsiballZ_copy.py'
Oct 08 10:02:15 compute-0 sudo[240380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:15 compute-0 python3.9[240382]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917734.2465703-2210-172891251407674/.source.json _original_basename=.f22asbx2 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:15 compute-0 sudo[240380]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:15] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:02:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:15] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:02:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:15 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:02:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:15 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:15 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:02:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:15.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:16 compute-0 sudo[240533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsfuycpckfkmrwshhxytgbkemmgbaqfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917735.808014-2255-111719870661426/AnsiballZ_file.py'
Oct 08 10:02:16 compute-0 sudo[240533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:16 compute-0 python3.9[240535]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:16 compute-0 sudo[240533]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:16.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:16 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:16 compute-0 sudo[240685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvmwzsrdsrnwbqxjbrmkfmdcmitwyioz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917736.6102543-2279-86269770749095/AnsiballZ_stat.py'
Oct 08 10:02:16 compute-0 sudo[240685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:16 compute-0 ceph-mon[73572]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:02:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:17.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:02:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:17.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:02:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:17.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:02:17 compute-0 sudo[240685]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:17 compute-0 sudo[240809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diujnieflwxrjrdncavpjpdruegxrgjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917736.6102543-2279-86269770749095/AnsiballZ_copy.py'
Oct 08 10:02:17 compute-0 sudo[240809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:17 compute-0 sudo[240809]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:17 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:02:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:02:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:02:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:02:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:02:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:17 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:17.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:02:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:02:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:02:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:02:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:02:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:18.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:18 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:18 compute-0 sudo[240962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtjeatkgfuvtghomexrbsldzhgmbggvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917738.1827738-2330-97672080985968/AnsiballZ_container_config_data.py'
Oct 08 10:02:18 compute-0 sudo[240962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:18 compute-0 python3.9[240964]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 08 10:02:18 compute-0 sudo[240962]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:18 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:02:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:18 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:02:19 compute-0 ceph-mon[73572]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:02:19 compute-0 sudo[241115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggsbrazjhezuftjxyexbdopoucfrbayq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917739.2332249-2357-165809735998905/AnsiballZ_container_config_hash.py'
Oct 08 10:02:19 compute-0 sudo[241115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:19 compute-0 python3.9[241117]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 10:02:19 compute-0 sudo[241115]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:02:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:19.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:20.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:20 compute-0 sudo[241268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqsddulhfrkmneulqkvenmwtvbopmcmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917740.1118119-2384-231293020091826/AnsiballZ_podman_container_info.py'
Oct 08 10:02:20 compute-0 sudo[241268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:20 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:20 compute-0 python3.9[241270]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 08 10:02:20 compute-0 sudo[241268]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:21 compute-0 ceph-mon[73572]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:02:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:02:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:21.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:02:22 compute-0 sudo[241450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmvsyqcbgemucscxsgjorbmnpjnsborn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917742.0502567-2423-93303219271344/AnsiballZ_edpm_container_manage.py'
Oct 08 10:02:22 compute-0 sudo[241450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:22.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:22 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:22 compute-0 python3[241452]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 10:02:23 compute-0 ceph-mon[73572]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:02:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:23 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faadc000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:23 compute-0 podman[241465]: 2025-10-08 10:02:23.850685815 +0000 UTC m=+1.208772763 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 08 10:02:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:02:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:23 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:02:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:02:23 compute-0 podman[241501]: 2025-10-08 10:02:23.969712339 +0000 UTC m=+0.127038293 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 08 10:02:24 compute-0 podman[241549]: 2025-10-08 10:02:24.017326125 +0000 UTC m=+0.050493299 container create 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct 08 10:02:24 compute-0 podman[241549]: 2025-10-08 10:02:23.985533996 +0000 UTC m=+0.018701190 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 08 10:02:24 compute-0 python3[241452]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 08 10:02:24 compute-0 sudo[241450]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:24 compute-0 sudo[241591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:02:24 compute-0 sudo[241591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:24 compute-0 sudo[241591]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:02:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:24.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:02:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:24 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:24 compute-0 sudo[241765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmrztnafxjynduqvrhwslkkaeuvsbkfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917744.513838-2447-104282999769729/AnsiballZ_stat.py'
Oct 08 10:02:24 compute-0 sudo[241765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:24 compute-0 python3.9[241767]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:02:24 compute-0 sudo[241765]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:25 compute-0 ceph-mon[73572]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:02:25 compute-0 sudo[241920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fooqkrfjfeigqqepeqjhzmdtrvwkbxjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917745.399527-2474-64343457823727/AnsiballZ_file.py'
Oct 08 10:02:25 compute-0 sudo[241920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:02:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:02:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:25 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:25 compute-0 python3.9[241922]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:02:25 compute-0 sudo[241920]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:25 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 08 10:02:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:25.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 08 10:02:26 compute-0 sudo[241997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnwwhdzbqkpsaipsjdpkkuumuatusjkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917745.399527-2474-64343457823727/AnsiballZ_stat.py'
Oct 08 10:02:26 compute-0 sudo[241997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:26 compute-0 python3.9[241999]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:02:26 compute-0 sudo[241997]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:26 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:26 compute-0 sudo[242148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvitchklmwbqyyurbcdwzqmkaiyhavdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917746.403927-2474-196243196101786/AnsiballZ_copy.py'
Oct 08 10:02:26 compute-0 sudo[242148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:26 compute-0 python3.9[242150]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917746.403927-2474-196243196101786/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:27 compute-0 sudo[242148]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:27.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:02:27 compute-0 sudo[242225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxdngncamtprjsvlyahmoxjhcqmtefie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917746.403927-2474-196243196101786/AnsiballZ_systemd.py'
Oct 08 10:02:27 compute-0 sudo[242225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:27 compute-0 ceph-mon[73572]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:02:27 compute-0 python3.9[242227]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 10:02:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100227 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:02:27 compute-0 systemd[1]: Reloading.
Oct 08 10:02:27 compute-0 systemd-rc-local-generator[242253]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:02:27 compute-0 systemd-sysv-generator[242257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:02:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:27 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:02:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:27 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:27 compute-0 sudo[242225]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:27.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:28 compute-0 sudo[242337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmskwiyboqjytycoqnndvicuxiftqavv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917746.403927-2474-196243196101786/AnsiballZ_systemd.py'
Oct 08 10:02:28 compute-0 sudo[242337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:28.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:28 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:28 compute-0 python3.9[242339]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:02:28 compute-0 systemd[1]: Reloading.
Oct 08 10:02:28 compute-0 systemd-rc-local-generator[242367]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:02:28 compute-0 systemd-sysv-generator[242371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:02:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:28 compute-0 systemd[1]: Starting multipathd container...
Oct 08 10:02:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311.
Oct 08 10:02:29 compute-0 podman[242378]: 2025-10-08 10:02:29.035458087 +0000 UTC m=+0.131316511 container init 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Oct 08 10:02:29 compute-0 multipathd[242394]: + sudo -E kolla_set_configs
Oct 08 10:02:29 compute-0 sudo[242400]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 08 10:02:29 compute-0 sudo[242400]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 08 10:02:29 compute-0 sudo[242400]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 10:02:29 compute-0 podman[242378]: 2025-10-08 10:02:29.080790889 +0000 UTC m=+0.176649293 container start 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 10:02:29 compute-0 podman[242378]: multipathd
Oct 08 10:02:29 compute-0 systemd[1]: Started multipathd container.
Oct 08 10:02:29 compute-0 multipathd[242394]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 10:02:29 compute-0 multipathd[242394]: INFO:__main__:Validating config file
Oct 08 10:02:29 compute-0 multipathd[242394]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 10:02:29 compute-0 multipathd[242394]: INFO:__main__:Writing out command to execute
Oct 08 10:02:29 compute-0 sudo[242400]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:29 compute-0 multipathd[242394]: ++ cat /run_command
Oct 08 10:02:29 compute-0 multipathd[242394]: + CMD='/usr/sbin/multipathd -d'
Oct 08 10:02:29 compute-0 multipathd[242394]: + ARGS=
Oct 08 10:02:29 compute-0 multipathd[242394]: + sudo kolla_copy_cacerts
Oct 08 10:02:29 compute-0 sudo[242418]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 08 10:02:29 compute-0 sudo[242418]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 08 10:02:29 compute-0 sudo[242418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 10:02:29 compute-0 sudo[242337]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:29 compute-0 sudo[242418]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:29 compute-0 multipathd[242394]: Running command: '/usr/sbin/multipathd -d'
Oct 08 10:02:29 compute-0 multipathd[242394]: + [[ ! -n '' ]]
Oct 08 10:02:29 compute-0 multipathd[242394]: + . kolla_extend_start
Oct 08 10:02:29 compute-0 multipathd[242394]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 08 10:02:29 compute-0 multipathd[242394]: + umask 0022
Oct 08 10:02:29 compute-0 multipathd[242394]: + exec /usr/sbin/multipathd -d
Oct 08 10:02:29 compute-0 podman[242402]: 2025-10-08 10:02:29.157701724 +0000 UTC m=+0.065516031 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct 08 10:02:29 compute-0 multipathd[242394]: 3707.848948 | --------start up--------
Oct 08 10:02:29 compute-0 multipathd[242394]: 3707.848966 | read /etc/multipath.conf
Oct 08 10:02:29 compute-0 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-2482321632df7677.service: Main process exited, code=exited, status=1/FAILURE
Oct 08 10:02:29 compute-0 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-2482321632df7677.service: Failed with result 'exit-code'.
Oct 08 10:02:29 compute-0 multipathd[242394]: 3707.855794 | path checkers start up
Oct 08 10:02:29 compute-0 ceph-mon[73572]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:02:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:29 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:02:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:29 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:30.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:30 compute-0 python3.9[242586]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:02:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:30 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:30 compute-0 sudo[242749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqwyduflhjiexsfjgjetjyswwiwlztny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917750.6704757-2582-32795128098340/AnsiballZ_command.py'
Oct 08 10:02:30 compute-0 podman[242712]: 2025-10-08 10:02:30.933957093 +0000 UTC m=+0.051727928 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:02:30 compute-0 sudo[242749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:31 compute-0 python3.9[242757]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:02:31 compute-0 sudo[242749]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:31 compute-0 ceph-mon[73572]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:02:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:31 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:31 compute-0 sudo[242921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whgyvkzhglckuxgzqocdpgufxuiremcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917751.533129-2606-153734389692588/AnsiballZ_systemd.py'
Oct 08 10:02:31 compute-0 sudo[242921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:02:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:31 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:31.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:32 compute-0 python3.9[242923]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 10:02:32 compute-0 systemd[1]: Stopping multipathd container...
Oct 08 10:02:32 compute-0 multipathd[242394]: 3710.957285 | exit (signal)
Oct 08 10:02:32 compute-0 multipathd[242394]: 3710.957900 | --------shut down-------
Oct 08 10:02:32 compute-0 systemd[1]: libpod-1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311.scope: Deactivated successfully.
Oct 08 10:02:32 compute-0 podman[242928]: 2025-10-08 10:02:32.29890489 +0000 UTC m=+0.075140379 container died 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:02:32 compute-0 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-2482321632df7677.timer: Deactivated successfully.
Oct 08 10:02:32 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311.
Oct 08 10:02:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-userdata-shm.mount: Deactivated successfully.
Oct 08 10:02:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713-merged.mount: Deactivated successfully.
Oct 08 10:02:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:32 compute-0 podman[242928]: 2025-10-08 10:02:32.43025583 +0000 UTC m=+0.206491329 container cleanup 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 10:02:32 compute-0 podman[242928]: multipathd
Oct 08 10:02:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:32 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:32 compute-0 podman[242956]: multipathd
Oct 08 10:02:32 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 08 10:02:32 compute-0 systemd[1]: Stopped multipathd container.
Oct 08 10:02:32 compute-0 systemd[1]: Starting multipathd container...
Oct 08 10:02:32 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:32 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311.
Oct 08 10:02:32 compute-0 podman[242969]: 2025-10-08 10:02:32.650949213 +0000 UTC m=+0.122027771 container init 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 08 10:02:32 compute-0 multipathd[242982]: + sudo -E kolla_set_configs
Oct 08 10:02:32 compute-0 podman[242969]: 2025-10-08 10:02:32.681773671 +0000 UTC m=+0.152852219 container start 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:02:32 compute-0 sudo[242988]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 08 10:02:32 compute-0 podman[242969]: multipathd
Oct 08 10:02:32 compute-0 sudo[242988]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 08 10:02:32 compute-0 sudo[242988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 10:02:32 compute-0 systemd[1]: Started multipathd container.
Oct 08 10:02:32 compute-0 multipathd[242982]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 10:02:32 compute-0 multipathd[242982]: INFO:__main__:Validating config file
Oct 08 10:02:32 compute-0 multipathd[242982]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 10:02:32 compute-0 multipathd[242982]: INFO:__main__:Writing out command to execute
Oct 08 10:02:32 compute-0 ceph-mon[73572]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:02:32 compute-0 sudo[242988]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:32 compute-0 multipathd[242982]: ++ cat /run_command
Oct 08 10:02:32 compute-0 multipathd[242982]: + CMD='/usr/sbin/multipathd -d'
Oct 08 10:02:32 compute-0 multipathd[242982]: + ARGS=
Oct 08 10:02:32 compute-0 multipathd[242982]: + sudo kolla_copy_cacerts
Oct 08 10:02:32 compute-0 sudo[243003]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 08 10:02:32 compute-0 sudo[243003]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 08 10:02:32 compute-0 sudo[243003]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 10:02:32 compute-0 sudo[243003]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:32 compute-0 sudo[242921]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:32 compute-0 multipathd[242982]: + [[ ! -n '' ]]
Oct 08 10:02:32 compute-0 multipathd[242982]: + . kolla_extend_start
Oct 08 10:02:32 compute-0 multipathd[242982]: Running command: '/usr/sbin/multipathd -d'
Oct 08 10:02:32 compute-0 multipathd[242982]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 08 10:02:32 compute-0 multipathd[242982]: + umask 0022
Oct 08 10:02:32 compute-0 multipathd[242982]: + exec /usr/sbin/multipathd -d
Oct 08 10:02:32 compute-0 multipathd[242982]: 3711.473663 | --------start up--------
Oct 08 10:02:32 compute-0 multipathd[242982]: 3711.473688 | read /etc/multipath.conf
Oct 08 10:02:32 compute-0 multipathd[242982]: 3711.480874 | path checkers start up
Oct 08 10:02:32 compute-0 podman[242989]: 2025-10-08 10:02:32.805001041 +0000 UTC m=+0.111724692 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 08 10:02:32 compute-0 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-38b7d3dee84a03a5.service: Main process exited, code=exited, status=1/FAILURE
Oct 08 10:02:32 compute-0 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-38b7d3dee84a03a5.service: Failed with result 'exit-code'.
Oct 08 10:02:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:02:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:02:33 compute-0 sudo[243182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmzldhysjpcjtwefwpgiwwxkqipyjfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917753.2426627-2630-169390874897507/AnsiballZ_file.py'
Oct 08 10:02:33 compute-0 sudo[243182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:33 compute-0 podman[243145]: 2025-10-08 10:02:33.572887792 +0000 UTC m=+0.075730768 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 08 10:02:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.669605) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753669711, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 998, "num_deletes": 256, "total_data_size": 1736031, "memory_usage": 1765024, "flush_reason": "Manual Compaction"}
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753688304, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1696587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17972, "largest_seqno": 18969, "table_properties": {"data_size": 1691732, "index_size": 2379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 9945, "raw_average_key_size": 18, "raw_value_size": 1682072, "raw_average_value_size": 3126, "num_data_blocks": 107, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917664, "oldest_key_time": 1759917664, "file_creation_time": 1759917753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 18744 microseconds, and 4948 cpu microseconds.
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.688361) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1696587 bytes OK
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.688390) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691197) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691213) EVENT_LOG_v1 {"time_micros": 1759917753691207, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691232) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1731429, prev total WAL file size 1731429, number of live WAL files 2.
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691817) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1656KB)], [38(11MB)]
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753691850, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 13804392, "oldest_snapshot_seqno": -1}
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4940 keys, 13315656 bytes, temperature: kUnknown
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753738086, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13315656, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13281381, "index_size": 20853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 125645, "raw_average_key_size": 25, "raw_value_size": 13190279, "raw_average_value_size": 2670, "num_data_blocks": 853, "num_entries": 4940, "num_filter_entries": 4940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.738360) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13315656 bytes
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.739303) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 297.9 rd, 287.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.5 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(16.0) write-amplify(7.8) OK, records in: 5467, records dropped: 527 output_compression: NoCompression
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.739323) EVENT_LOG_v1 {"time_micros": 1759917753739313, "job": 18, "event": "compaction_finished", "compaction_time_micros": 46333, "compaction_time_cpu_micros": 23261, "output_level": 6, "num_output_files": 1, "total_output_size": 13315656, "num_input_records": 5467, "num_output_records": 4940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753739698, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753741347, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:02:33 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:02:33 compute-0 python3.9[243188]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:33 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:33 compute-0 sudo[243182]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:02:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:02:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:33 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:33.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:02:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:02:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:34 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:34 compute-0 sudo[243342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewwqtwxcppjlnhizrbpcplzkmvdjxwwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917754.252472-2666-130593993961691/AnsiballZ_file.py'
Oct 08 10:02:34 compute-0 sudo[243342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:34 compute-0 python3.9[243344]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 08 10:02:34 compute-0 sudo[243342]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:34 compute-0 ceph-mon[73572]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:02:35 compute-0 sudo[243495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-basxgoncxvqxlrzufrfugicenagjeipj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917754.9863186-2690-74470638311413/AnsiballZ_modprobe.py'
Oct 08 10:02:35 compute-0 sudo[243495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:35 compute-0 python3.9[243497]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 08 10:02:35 compute-0 kernel: Key type psk registered
Oct 08 10:02:35 compute-0 sudo[243495]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:35] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:02:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:35] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:02:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:35 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:02:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:35 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:35.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:36 compute-0 sudo[243657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxfintrwzktgbcetnzikyexetambnnlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917755.8180587-2714-172181750369367/AnsiballZ_stat.py'
Oct 08 10:02:36 compute-0 sudo[243657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:36 compute-0 python3.9[243659]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:02:36 compute-0 sudo[243657]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:36.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:36 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:36 compute-0 sudo[243780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfznhswwvkohayhtndibntbxvxkndirn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917755.8180587-2714-172181750369367/AnsiballZ_copy.py'
Oct 08 10:02:36 compute-0 sudo[243780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:36 compute-0 python3.9[243782]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917755.8180587-2714-172181750369367/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:36 compute-0 sudo[243780]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:36 compute-0 ceph-mon[73572]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:02:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:37.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:02:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:37.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:02:37 compute-0 sudo[243933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvwbsnhijnvcbkllghvagxaqesohqmfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917757.2912269-2762-192483159685093/AnsiballZ_lineinfile.py'
Oct 08 10:02:37 compute-0 sudo[243933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:37 compute-0 python3.9[243935]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:37 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:37 compute-0 sudo[243933]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:02:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:37 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:37.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:38 compute-0 sudo[244086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmastsuiqrlzruxltxeoojsknvmqxool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917758.1166043-2786-199964542620451/AnsiballZ_systemd.py'
Oct 08 10:02:38 compute-0 sudo[244086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:38 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:38 compute-0 python3.9[244088]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 10:02:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:38 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 08 10:02:38 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 08 10:02:38 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 08 10:02:38 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 08 10:02:38 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 08 10:02:38 compute-0 sudo[244086]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:38 compute-0 ceph-mon[73572]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:02:39 compute-0 sudo[244243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-levdkwqgmyitcfdiuqnvcnhkmhuztudt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917759.1193974-2810-136503364985956/AnsiballZ_setup.py'
Oct 08 10:02:39 compute-0 sudo[244243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:39 compute-0 python3.9[244245]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 10:02:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:39 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:02:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:39 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:39 compute-0 sudo[244243]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:40 compute-0 sudo[244328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xngjvswyopyqdcsbyhabwnoxnufbusgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917759.1193974-2810-136503364985956/AnsiballZ_dnf.py'
Oct 08 10:02:40 compute-0 sudo[244328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:40 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:40 compute-0 python3.9[244330]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 10:02:41 compute-0 ceph-mon[73572]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:02:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:41.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:42 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:43 compute-0 ceph-mon[73572]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:43 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:02:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:43 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40040a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:43.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:44 compute-0 sudo[244336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:02:44 compute-0 sudo[244336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:44 compute-0 sudo[244336]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:44 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:45 compute-0 ceph-mon[73572]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:02:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:45] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:02:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:45] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:02:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:45 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:45 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:45.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:46.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:46 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:46 compute-0 systemd[1]: Reloading.
Oct 08 10:02:46 compute-0 systemd-rc-local-generator[244393]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:02:46 compute-0 systemd-sysv-generator[244397]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:02:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:47.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:02:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:47.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:02:47 compute-0 ceph-mon[73572]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:47 compute-0 systemd[1]: Reloading.
Oct 08 10:02:47 compute-0 systemd-sysv-generator[244430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:02:47 compute-0 systemd-rc-local-generator[244426]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:02:47 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 08 10:02:47 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:02:47
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', '.nfs', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images']
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:02:47 compute-0 lvm[244476]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:02:47 compute-0 lvm[244476]: VG ceph_vg0 finished
Oct 08 10:02:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:47 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 10:02:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 10:02:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:02:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:02:47 compute-0 systemd[1]: Reloading.
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:02:47 compute-0 systemd-sysv-generator[244529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:02:47 compute-0 systemd-rc-local-generator[244526]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:02:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:47 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:02:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:02:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:02:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:02:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:02:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:02:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:02:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:02:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:02:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:02:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:02:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 10:02:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:02:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:02:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:48.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:48 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:48 compute-0 sudo[244328]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:49 compute-0 ceph-mon[73572]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 10:02:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 10:02:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.454s CPU time.
Oct 08 10:02:49 compute-0 systemd[1]: run-rc5b44645e0fc4f51b473728b08cf1e56.service: Deactivated successfully.
Oct 08 10:02:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100249 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:02:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:49 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:49 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:49 compute-0 sudo[245817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dognvckuprjcknonlsqbsevmmwylfbhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917769.6859367-2846-52251462702556/AnsiballZ_file.py'
Oct 08 10:02:49 compute-0 sudo[245817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:49.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:50 compute-0 python3.9[245819]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:50 compute-0 sudo[245817]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:50 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:51 compute-0 python3.9[245969]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 10:02:51 compute-0 ceph-mon[73572]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:02:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:51 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:02:51 compute-0 sudo[246125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndjpovoiatjpefqryxjnrjusnuaqcuwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917771.6557832-2898-241136685484732/AnsiballZ_file.py'
Oct 08 10:02:51 compute-0 sudo[246125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:51 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:51.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:52 compute-0 python3.9[246127]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:02:52 compute-0 sudo[246125]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:52.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:52 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:53 compute-0 ceph-mon[73572]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:02:53 compute-0 sudo[246278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crsenadoohlebxtnrmcfbnqckdncydwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917772.7230716-2931-236547639023657/AnsiballZ_systemd_service.py'
Oct 08 10:02:53 compute-0 sudo[246278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:02:53 compute-0 python3.9[246280]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 10:02:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:53 compute-0 systemd[1]: Reloading.
Oct 08 10:02:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:53 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:53 compute-0 systemd-rc-local-generator[246302]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:02:53 compute-0 systemd-sysv-generator[246309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:02:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:02:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:53 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:53.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:54 compute-0 sudo[246278]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:54 compute-0 podman[246317]: 2025-10-08 10:02:54.192109678 +0000 UTC m=+0.139458352 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 08 10:02:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:54 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:54 compute-0 python3.9[246492]: ansible-ansible.builtin.service_facts Invoked
Oct 08 10:02:54 compute-0 sudo[246493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:02:54 compute-0 sudo[246493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:54 compute-0 sudo[246493]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:54 compute-0 network[246538]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 10:02:54 compute-0 network[246541]: 'network-scripts' will be removed from distribution in near future.
Oct 08 10:02:54 compute-0 network[246544]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 10:02:54 compute-0 sudo[246529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:02:54 compute-0 sudo[246529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:55 compute-0 ceph-mon[73572]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:02:55 compute-0 sudo[246529]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:02:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:02:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:02:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:02:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:02:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:02:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:02:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:02:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:02:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:02:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:02:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:02:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:02:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:02:55 compute-0 sudo[246600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:02:55 compute-0 sudo[246600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:55 compute-0 sudo[246600]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:02:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:02:55 compute-0 sudo[246626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:02:55 compute-0 sudo[246626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:55 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:02:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:55 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:02:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:55.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:02:56 compute-0 podman[246713]: 2025-10-08 10:02:56.155090011 +0000 UTC m=+0.040952074 container create 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:02:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:02:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:02:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:02:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:02:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:02:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:02:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:02:56 compute-0 systemd[1]: Started libpod-conmon-0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f.scope.
Oct 08 10:02:56 compute-0 podman[246713]: 2025-10-08 10:02:56.13697542 +0000 UTC m=+0.022837503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:02:56 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:02:56 compute-0 podman[246713]: 2025-10-08 10:02:56.253805395 +0000 UTC m=+0.139667458 container init 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 10:02:56 compute-0 podman[246713]: 2025-10-08 10:02:56.260348295 +0000 UTC m=+0.146210358 container start 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:02:56 compute-0 podman[246713]: 2025-10-08 10:02:56.263233537 +0000 UTC m=+0.149095600 container attach 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 10:02:56 compute-0 laughing_hodgkin[246735]: 167 167
Oct 08 10:02:56 compute-0 systemd[1]: libpod-0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f.scope: Deactivated successfully.
Oct 08 10:02:56 compute-0 podman[246713]: 2025-10-08 10:02:56.266262684 +0000 UTC m=+0.152124747 container died 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 10:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-995344a136dc704a55da24d57aa03f80dd7e2c147215736938c6f320743a0751-merged.mount: Deactivated successfully.
Oct 08 10:02:56 compute-0 podman[246713]: 2025-10-08 10:02:56.312537657 +0000 UTC m=+0.198399720 container remove 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:02:56 compute-0 systemd[1]: libpod-conmon-0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f.scope: Deactivated successfully.
Oct 08 10:02:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:02:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:02:56 compute-0 podman[246769]: 2025-10-08 10:02:56.455727996 +0000 UTC m=+0.035508948 container create 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 08 10:02:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:56 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:56 compute-0 systemd[1]: Started libpod-conmon-0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896.scope.
Oct 08 10:02:56 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:56 compute-0 podman[246769]: 2025-10-08 10:02:56.439838717 +0000 UTC m=+0.019619689 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:02:56 compute-0 podman[246769]: 2025-10-08 10:02:56.536920339 +0000 UTC m=+0.116701301 container init 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:02:56 compute-0 podman[246769]: 2025-10-08 10:02:56.545157333 +0000 UTC m=+0.124938285 container start 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:02:56 compute-0 podman[246769]: 2025-10-08 10:02:56.548067916 +0000 UTC m=+0.127848898 container attach 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:02:56 compute-0 admiring_ishizaka[246788]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:02:56 compute-0 admiring_ishizaka[246788]: --> All data devices are unavailable
Oct 08 10:02:56 compute-0 systemd[1]: libpod-0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896.scope: Deactivated successfully.
Oct 08 10:02:56 compute-0 podman[246769]: 2025-10-08 10:02:56.912024381 +0000 UTC m=+0.491805353 container died 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010-merged.mount: Deactivated successfully.
Oct 08 10:02:56 compute-0 podman[246769]: 2025-10-08 10:02:56.958756938 +0000 UTC m=+0.538537880 container remove 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 10:02:56 compute-0 systemd[1]: libpod-conmon-0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896.scope: Deactivated successfully.
Oct 08 10:02:57 compute-0 sudo[246626]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:57 compute-0 sudo[246843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:02:57 compute-0 sudo[246843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:57 compute-0 sudo[246843]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:57.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:02:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:57.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:02:57 compute-0 sudo[246872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:02:57 compute-0 sudo[246872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:57 compute-0 ceph-mon[73572]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:02:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:02:57.400 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:02:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:02:57.401 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:02:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:02:57.401 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:02:57 compute-0 podman[246963]: 2025-10-08 10:02:57.50888771 +0000 UTC m=+0.040392676 container create b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 08 10:02:57 compute-0 systemd[1]: Started libpod-conmon-b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97.scope.
Oct 08 10:02:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:02:57 compute-0 podman[246963]: 2025-10-08 10:02:57.489603593 +0000 UTC m=+0.021108579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:02:57 compute-0 podman[246963]: 2025-10-08 10:02:57.603184572 +0000 UTC m=+0.134689558 container init b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 08 10:02:57 compute-0 podman[246963]: 2025-10-08 10:02:57.612891284 +0000 UTC m=+0.144396260 container start b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 10:02:57 compute-0 condescending_albattani[246982]: 167 167
Oct 08 10:02:57 compute-0 systemd[1]: libpod-b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97.scope: Deactivated successfully.
Oct 08 10:02:57 compute-0 podman[246963]: 2025-10-08 10:02:57.675548572 +0000 UTC m=+0.207053568 container attach b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:02:57 compute-0 podman[246963]: 2025-10-08 10:02:57.67613353 +0000 UTC m=+0.207638526 container died b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-63cca2c18112661f3e72f5dec0cc2006c79d0a4af22fcc6bfb3f7a3026aff853-merged.mount: Deactivated successfully.
Oct 08 10:02:57 compute-0 podman[246963]: 2025-10-08 10:02:57.726194265 +0000 UTC m=+0.257699231 container remove b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:02:57 compute-0 systemd[1]: libpod-conmon-b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97.scope: Deactivated successfully.
Oct 08 10:02:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:57 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:57 compute-0 podman[247022]: 2025-10-08 10:02:57.894662594 +0000 UTC m=+0.052797473 container create 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 10:02:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:02:57 compute-0 systemd[1]: Started libpod-conmon-00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040.scope.
Oct 08 10:02:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:57 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:57 compute-0 podman[247022]: 2025-10-08 10:02:57.872779973 +0000 UTC m=+0.030914862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:02:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:02:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:02:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:57.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:57 compute-0 podman[247022]: 2025-10-08 10:02:57.992242132 +0000 UTC m=+0.150377001 container init 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:02:58 compute-0 podman[247022]: 2025-10-08 10:02:58.000663382 +0000 UTC m=+0.158798281 container start 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 10:02:58 compute-0 podman[247022]: 2025-10-08 10:02:58.004725172 +0000 UTC m=+0.162860081 container attach 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:02:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:58 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:02:58 compute-0 priceless_bartik[247062]: {
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:     "1": [
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:         {
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "devices": [
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "/dev/loop3"
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             ],
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "lv_name": "ceph_lv0",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "lv_size": "21470642176",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "name": "ceph_lv0",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "tags": {
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.cluster_name": "ceph",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.crush_device_class": "",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.encrypted": "0",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.osd_id": "1",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.type": "block",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.vdo": "0",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:                 "ceph.with_tpm": "0"
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             },
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "type": "block",
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:             "vg_name": "ceph_vg0"
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:         }
Oct 08 10:02:58 compute-0 priceless_bartik[247062]:     ]
Oct 08 10:02:58 compute-0 priceless_bartik[247062]: }
Oct 08 10:02:58 compute-0 systemd[1]: libpod-00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040.scope: Deactivated successfully.
Oct 08 10:02:58 compute-0 podman[247022]: 2025-10-08 10:02:58.295178411 +0000 UTC m=+0.453313290 container died 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64-merged.mount: Deactivated successfully.
Oct 08 10:02:58 compute-0 podman[247022]: 2025-10-08 10:02:58.33726517 +0000 UTC m=+0.495400049 container remove 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:02:58 compute-0 systemd[1]: libpod-conmon-00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040.scope: Deactivated successfully.
Oct 08 10:02:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:02:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:02:58 compute-0 sudo[246872]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:58 compute-0 sudo[247084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:02:58 compute-0 sudo[247084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:58 compute-0 sudo[247084]: pam_unix(sudo:session): session closed for user root
Oct 08 10:02:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:58 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:58 compute-0 sudo[247109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:02:58 compute-0 sudo[247109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:02:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:02:58 compute-0 podman[247174]: 2025-10-08 10:02:58.929347457 +0000 UTC m=+0.039625962 container create 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:02:58 compute-0 systemd[1]: Started libpod-conmon-2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24.scope.
Oct 08 10:02:58 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:02:58 compute-0 podman[247174]: 2025-10-08 10:02:58.996931402 +0000 UTC m=+0.107209927 container init 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 10:02:59 compute-0 podman[247174]: 2025-10-08 10:02:59.002852792 +0000 UTC m=+0.113131297 container start 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 08 10:02:59 compute-0 podman[247174]: 2025-10-08 10:02:58.910277665 +0000 UTC m=+0.020556190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:02:59 compute-0 podman[247174]: 2025-10-08 10:02:59.006146678 +0000 UTC m=+0.116425183 container attach 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:02:59 compute-0 nervous_noether[247190]: 167 167
Oct 08 10:02:59 compute-0 systemd[1]: libpod-2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24.scope: Deactivated successfully.
Oct 08 10:02:59 compute-0 conmon[247190]: conmon 2cc76111378f2bfd6ea6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24.scope/container/memory.events
Oct 08 10:02:59 compute-0 podman[247174]: 2025-10-08 10:02:59.008661108 +0000 UTC m=+0.118939623 container died 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 10:02:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-75768573f91abb87aa93d00c09b333c609e70cb598dbd90210195b1f0787446c-merged.mount: Deactivated successfully.
Oct 08 10:02:59 compute-0 podman[247174]: 2025-10-08 10:02:59.048399952 +0000 UTC m=+0.158678457 container remove 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:02:59 compute-0 systemd[1]: libpod-conmon-2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24.scope: Deactivated successfully.
Oct 08 10:02:59 compute-0 podman[247215]: 2025-10-08 10:02:59.224366491 +0000 UTC m=+0.054801457 container create c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 10:02:59 compute-0 ceph-mon[73572]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:02:59 compute-0 systemd[1]: Started libpod-conmon-c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8.scope.
Oct 08 10:02:59 compute-0 podman[247215]: 2025-10-08 10:02:59.194809085 +0000 UTC m=+0.025244091 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:02:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:02:59 compute-0 podman[247215]: 2025-10-08 10:02:59.325482662 +0000 UTC m=+0.155917638 container init c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 10:02:59 compute-0 podman[247215]: 2025-10-08 10:02:59.332676673 +0000 UTC m=+0.163111629 container start c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:02:59 compute-0 podman[247215]: 2025-10-08 10:02:59.350807754 +0000 UTC m=+0.181242730 container attach c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:02:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:59 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:02:59 compute-0 lvm[247338]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:02:59 compute-0 lvm[247338]: VG ceph_vg0 finished
Oct 08 10:02:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:59 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:02:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:02:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:02:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:59.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:00 compute-0 romantic_taussig[247232]: {}
Oct 08 10:03:00 compute-0 systemd[1]: libpod-c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8.scope: Deactivated successfully.
Oct 08 10:03:00 compute-0 podman[247215]: 2025-10-08 10:03:00.039726914 +0000 UTC m=+0.870161870 container died c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 08 10:03:00 compute-0 systemd[1]: libpod-c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8.scope: Consumed 1.080s CPU time.
Oct 08 10:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5-merged.mount: Deactivated successfully.
Oct 08 10:03:00 compute-0 podman[247215]: 2025-10-08 10:03:00.091796583 +0000 UTC m=+0.922231539 container remove c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:03:00 compute-0 systemd[1]: libpod-conmon-c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8.scope: Deactivated successfully.
Oct 08 10:03:00 compute-0 sudo[247109]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:03:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:03:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:03:00 compute-0 sudo[247448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkfkboddmmicflakyhhlxxhkimoagtuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917779.8857672-2988-195278182562900/AnsiballZ_systemd_service.py'
Oct 08 10:03:00 compute-0 sudo[247448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:03:00 compute-0 sudo[247451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:03:00 compute-0 sudo[247451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:03:00 compute-0 sudo[247451]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:00 compute-0 python3.9[247450]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:03:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:00 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:00 compute-0 sudo[247448]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:00 compute-0 sudo[247626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbnapesngxwmkxsvkeujglvegchsscwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917780.6684966-2988-157033098674239/AnsiballZ_systemd_service.py'
Oct 08 10:03:00 compute-0 sudo[247626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:03:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:03:01 compute-0 ceph-mon[73572]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:03:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:03:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:03:01 compute-0 python3.9[247628]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:03:01 compute-0 sudo[247626]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:01 compute-0 podman[247631]: 2025-10-08 10:03:01.314321055 +0000 UTC m=+0.057675350 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:03:01 compute-0 sudo[247802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swvnddndxjfyokgsneyxlnjhhptqlvhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917781.4190292-2988-120869510239363/AnsiballZ_systemd_service.py'
Oct 08 10:03:01 compute-0 sudo[247802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:03:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:03:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:03:02 compute-0 python3.9[247804]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:03:02 compute-0 sudo[247802]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:02 compute-0 sudo[247956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piisqwuxuhegqrpgujjqmjnxfiaokikl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917782.205283-2988-30544769762582/AnsiballZ_systemd_service.py'
Oct 08 10:03:02 compute-0 sudo[247956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:02 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:02 compute-0 python3.9[247958]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:03:02 compute-0 sudo[247956]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:03:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:03:03 compute-0 ceph-mon[73572]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:03:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:03:03 compute-0 sudo[248121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsivjzjhairysrzdolsqqqiwepruzquw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917782.9647257-2988-134448846564091/AnsiballZ_systemd_service.py'
Oct 08 10:03:03 compute-0 sudo[248121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:03 compute-0 podman[248084]: 2025-10-08 10:03:03.290911485 +0000 UTC m=+0.067251047 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:03:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:03:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 4238 writes, 19K keys, 4238 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 4238 writes, 4238 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1480 writes, 6020 keys, 1480 commit groups, 1.0 writes per commit group, ingest: 11.06 MB, 0.02 MB/s
                                           Interval WAL: 1480 writes, 1480 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    110.2      0.28              0.08         9    0.031       0      0       0.0       0.0
                                             L6      1/0   12.70 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    153.3    129.7      0.78              0.23         8    0.098     38K   4352       0.0       0.0
                                            Sum      1/0   12.70 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    112.7    124.5      1.07              0.31        17    0.063     38K   4352       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    191.2    193.8      0.25              0.11         6    0.042     16K   2052       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    153.3    129.7      0.78              0.23         8    0.098     38K   4352       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    111.4      0.28              0.08         8    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.030, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.11 MB/s write, 0.12 GB read, 0.10 MB/s read, 1.1 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 304.00 MB usage: 6.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000102 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(363,6.02 MB,1.9787%) FilterBlock(18,115.73 KB,0.0371782%) IndexBlock(18,225.48 KB,0.0724341%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 08 10:03:03 compute-0 python3.9[248129]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:03:03 compute-0 sudo[248121]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:03 compute-0 podman[248132]: 2025-10-08 10:03:03.676140511 +0000 UTC m=+0.055363775 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 08 10:03:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:03 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:03:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:03 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:03.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:04 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:03:04 compute-0 sudo[248302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yomeddjhhplionbmeotfzlwxumcyvfoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917783.8039536-2988-170733930038927/AnsiballZ_systemd_service.py'
Oct 08 10:03:04 compute-0 sudo[248302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:04.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:04 compute-0 python3.9[248304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:03:04 compute-0 sudo[248305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:03:04 compute-0 sudo[248305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:03:04 compute-0 sudo[248305]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:04 compute-0 sudo[248302]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:04 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c008f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:04 compute-0 sudo[248480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwikzrjetjkwqqonadkkvvaccqfovin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917784.5827541-2988-32612891052175/AnsiballZ_systemd_service.py'
Oct 08 10:03:04 compute-0 sudo[248480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:05 compute-0 python3.9[248482]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:03:05 compute-0 sudo[248480]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:05 compute-0 ceph-mon[73572]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:03:05 compute-0 sudo[248634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idcpjcvokisfetupcpelfbnvkjsdojpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917785.3200338-2988-217487203420093/AnsiballZ_systemd_service.py'
Oct 08 10:03:05 compute-0 sudo[248634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:05] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct 08 10:03:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:05] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct 08 10:03:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:05 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:03:05 compute-0 python3.9[248636]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:03:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:05 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:05 compute-0 sudo[248634]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:05.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:06 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:06 compute-0 sudo[248788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siwzxbxkckehnwibeuucwvvptxlhhceg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917786.6497402-3165-279637709995061/AnsiballZ_file.py'
Oct 08 10:03:06 compute-0 sudo[248788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:07.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:03:07 compute-0 python3.9[248790]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:07 compute-0 sudo[248788]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:07 compute-0 ceph-mon[73572]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:03:07 compute-0 sudo[248941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfyiouvskxtyyqtpyivesdflksqeboqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917787.2249684-3165-232529408207240/AnsiballZ_file.py'
Oct 08 10:03:07 compute-0 sudo[248941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:07 compute-0 python3.9[248943]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:07 compute-0 sudo[248941]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:03:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:07.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:08 compute-0 sudo[249094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foeajecxkavgjbxdhsmdrwmqppszyczc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917787.8464746-3165-101314375724935/AnsiballZ_file.py'
Oct 08 10:03:08 compute-0 sudo[249094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:08 compute-0 python3.9[249096]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:08 compute-0 sudo[249094]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:08 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:08 compute-0 sudo[249246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzmusvvjvkrhpdxsbtibhhnwutikutkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917788.4297955-3165-67019759057409/AnsiballZ_file.py'
Oct 08 10:03:08 compute-0 sudo[249246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:08 compute-0 python3.9[249248]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:08 compute-0 sudo[249246]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:09 compute-0 sudo[249399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjumcbhyzgmnvxemhgnywkcsximbcnme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917789.0428774-3165-150297607820792/AnsiballZ_file.py'
Oct 08 10:03:09 compute-0 sudo[249399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:09 compute-0 ceph-mon[73572]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:03:09 compute-0 python3.9[249401]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:09 compute-0 sudo[249399]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100309 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:03:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:09 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:09 compute-0 sudo[249551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thlnjaohbletuoldqwbnqsxckmmajgps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917789.6298242-3165-257389970457611/AnsiballZ_file.py'
Oct 08 10:03:09 compute-0 sudo[249551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:03:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:09 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:09.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:10 compute-0 python3.9[249553]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:10 compute-0 sudo[249551]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:10.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:10 compute-0 sudo[249704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijutxoxbewotqmyxalingzhmsmbghwtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917790.2495656-3165-96463790044104/AnsiballZ_file.py'
Oct 08 10:03:10 compute-0 sudo[249704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:10 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:10 compute-0 python3.9[249706]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:10 compute-0 sudo[249704]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:11 compute-0 sudo[249856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgmtuuxtqvbtvpczcuzncadloqzrmdwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917790.819703-3165-12252000887384/AnsiballZ_file.py'
Oct 08 10:03:11 compute-0 sudo[249856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:11 compute-0 python3.9[249859]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:11 compute-0 sudo[249856]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:11 compute-0 ceph-mon[73572]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:03:11 compute-0 kernel: ganesha.nfsd[229945]: segfault at 50 ip 00007fabbb7f232e sp 00007fab6f7fd210 error 4 in libntirpc.so.5.8[7fabbb7d7000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 08 10:03:11 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 10:03:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:11 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy ignored for local
Oct 08 10:03:11 compute-0 systemd[1]: Started Process Core Dump (PID 249884/UID 0).
Oct 08 10:03:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:03:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:03:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:11.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:03:12 compute-0 sudo[250012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zooyqotkhnguoqynjmonioegkfceezzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917791.9640799-3336-242585002733079/AnsiballZ_file.py'
Oct 08 10:03:12 compute-0 sudo[250012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:12.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:12 compute-0 python3.9[250014]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:12 compute-0 sudo[250012]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:12 compute-0 sudo[250164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmrvlljgpvnaeuvzisiigozlyaloxfig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917792.5988638-3336-247655820064731/AnsiballZ_file.py'
Oct 08 10:03:12 compute-0 sudo[250164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:12 compute-0 systemd-coredump[249885]: Process 227697 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007fabbb7f232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 10:03:13 compute-0 systemd[1]: systemd-coredump@7-249884-0.service: Deactivated successfully.
Oct 08 10:03:13 compute-0 systemd[1]: systemd-coredump@7-249884-0.service: Consumed 1.129s CPU time.
Oct 08 10:03:13 compute-0 python3.9[250166]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:13 compute-0 sudo[250164]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:13 compute-0 podman[250171]: 2025-10-08 10:03:13.058718825 +0000 UTC m=+0.024097353 container died 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:03:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a-merged.mount: Deactivated successfully.
Oct 08 10:03:13 compute-0 podman[250171]: 2025-10-08 10:03:13.121844779 +0000 UTC m=+0.087223307 container remove 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:03:13 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 10:03:13 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 10:03:13 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.599s CPU time.
Oct 08 10:03:13 compute-0 ceph-mon[73572]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:03:13 compute-0 sudo[250366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdlooftatfhomxnpojymnvtkccorrdgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917793.2062216-3336-220712937712183/AnsiballZ_file.py'
Oct 08 10:03:13 compute-0 sudo[250366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:13 compute-0 python3.9[250368]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:13 compute-0 sudo[250366]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:03:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:13.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:14 compute-0 sudo[250519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzvpbsypljvdzbjxlkwbdtnhkbotipmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917793.901775-3336-215073915926722/AnsiballZ_file.py'
Oct 08 10:03:14 compute-0 sudo[250519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:14 compute-0 python3.9[250521]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:14.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:14 compute-0 sudo[250519]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:14 compute-0 ceph-mon[73572]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:03:14 compute-0 sudo[250671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frarhfxhcanbtxhermkgqityntnaqlhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917794.5322587-3336-23592406424903/AnsiballZ_file.py'
Oct 08 10:03:14 compute-0 sudo[250671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:14 compute-0 python3.9[250673]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:14 compute-0 sudo[250671]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:15 compute-0 sudo[250824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjfkkynpgrkyseieckkducfbqqmqvfjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917795.121711-3336-275721078241354/AnsiballZ_file.py'
Oct 08 10:03:15 compute-0 sudo[250824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:15 compute-0 python3.9[250826]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:15 compute-0 sudo[250824]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:15] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct 08 10:03:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:15] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct 08 10:03:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:15.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:16 compute-0 sudo[250977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-andzoelenyiofpctwebtsrzclugtkfli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917795.7411098-3336-157468603156142/AnsiballZ_file.py'
Oct 08 10:03:16 compute-0 sudo[250977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:16 compute-0 python3.9[250979]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:16 compute-0 sudo[250977]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:16.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:16 compute-0 sudo[251129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-namkjvhhraeipbcgqahkbmkdxzuyvzpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917796.3552284-3336-190192070444428/AnsiballZ_file.py'
Oct 08 10:03:16 compute-0 sudo[251129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:16 compute-0 python3.9[251131]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:16 compute-0 sudo[251129]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:16 compute-0 ceph-mon[73572]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:17.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:03:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100317 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:03:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:03:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:03:17 compute-0 sudo[251282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvrkspbygreboxbxysucdrevoznjlsao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917797.6333487-3510-24306475257580/AnsiballZ_command.py'
Oct 08 10:03:17 compute-0 sudo[251282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:03:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:03:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:17.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:03:18 compute-0 python3.9[251284]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:03:18 compute-0 sudo[251282]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:03:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:03:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:03:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:03:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:19 compute-0 python3.9[251437]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 10:03:19 compute-0 ceph-mon[73572]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:19 compute-0 sudo[251588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guemfqkhofktvjmqajjiikwhpujkqibo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917799.4449685-3564-91876262479210/AnsiballZ_systemd_service.py'
Oct 08 10:03:19 compute-0 sudo[251588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:19.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:20 compute-0 python3.9[251590]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 10:03:20 compute-0 systemd[1]: Reloading.
Oct 08 10:03:20 compute-0 systemd-sysv-generator[251623]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:03:20 compute-0 systemd-rc-local-generator[251620]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:03:20 compute-0 sudo[251588]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:20.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:20 compute-0 sudo[251777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnvpgpcqmxfnuoexmprccycsxaqldqyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917800.6851575-3588-8669743841898/AnsiballZ_command.py'
Oct 08 10:03:20 compute-0 sudo[251777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:21 compute-0 ceph-mon[73572]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:21 compute-0 python3.9[251779]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:03:21 compute-0 sudo[251777]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:21 compute-0 sudo[251931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xucgeboxgsorrprxrrgowiyrzdmtzmhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917801.2930427-3588-133576771210153/AnsiballZ_command.py'
Oct 08 10:03:21 compute-0 sudo[251931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:21 compute-0 python3.9[251933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:03:21 compute-0 sudo[251931]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:03:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:21.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:22 compute-0 sudo[252085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgorqjarifawjaeorblmogglneohxrnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917801.8957393-3588-212738430480966/AnsiballZ_command.py'
Oct 08 10:03:22 compute-0 sudo[252085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:22 compute-0 python3.9[252087]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:03:22 compute-0 sudo[252085]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:22.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:22 compute-0 sudo[252238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoptjluayyatcmpbfizkkgitmbquimla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917802.5260932-3588-252786664234811/AnsiballZ_command.py'
Oct 08 10:03:22 compute-0 sudo[252238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:22 compute-0 python3.9[252240]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:03:22 compute-0 sudo[252238]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:23 compute-0 ceph-mon[73572]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:03:23 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 8.
Oct 08 10:03:23 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:03:23 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.599s CPU time.
Oct 08 10:03:23 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 10:03:23 compute-0 sudo[252404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhhlrotndrdwmshvavsktbjzzqnbaqhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917803.1197152-3588-210565901617229/AnsiballZ_command.py'
Oct 08 10:03:23 compute-0 sudo[252404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:23 compute-0 podman[252443]: 2025-10-08 10:03:23.550708693 +0000 UTC m=+0.042247810 container create 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 10:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 10:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:03:23 compute-0 python3.9[252412]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:03:23 compute-0 podman[252443]: 2025-10-08 10:03:23.617224808 +0000 UTC m=+0.108763945 container init 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 10:03:23 compute-0 podman[252443]: 2025-10-08 10:03:23.623389027 +0000 UTC m=+0.114928144 container start 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 10:03:23 compute-0 podman[252443]: 2025-10-08 10:03:23.528361209 +0000 UTC m=+0.019900336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:03:23 compute-0 bash[252443]: 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc
Oct 08 10:03:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 10:03:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 10:03:23 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:03:23 compute-0 sudo[252404]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 10:03:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 10:03:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 10:03:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 10:03:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 10:03:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:03:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:23.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:24 compute-0 sudo[252650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-helfazkptyrzqzdgrysvecjewfdonxfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917803.7908485-3588-81187852587690/AnsiballZ_command.py'
Oct 08 10:03:24 compute-0 sudo[252650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:24 compute-0 python3.9[252652]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:03:24 compute-0 sudo[252650]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:24 compute-0 podman[252654]: 2025-10-08 10:03:24.356707313 +0000 UTC m=+0.076908933 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct 08 10:03:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:24.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:24 compute-0 sudo[252750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:03:24 compute-0 sudo[252750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:03:24 compute-0 sudo[252750]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:24 compute-0 sudo[252854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uduvsnxigjhycmxyyvjkskmtptnurrkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917804.4077978-3588-252907008815057/AnsiballZ_command.py'
Oct 08 10:03:24 compute-0 sudo[252854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:24 compute-0 python3.9[252856]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:03:24 compute-0 sudo[252854]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:25 compute-0 ceph-mon[73572]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:25 compute-0 sudo[253008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uludbfumanwouxmqzdqoatphkgyjoszl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917804.9899163-3588-112852997351912/AnsiballZ_command.py'
Oct 08 10:03:25 compute-0 sudo[253008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:25 compute-0 python3.9[253010]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 10:03:25 compute-0 sudo[253008]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:25] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct 08 10:03:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:25] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct 08 10:03:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:26.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:27.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:03:27 compute-0 ceph-mon[73572]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:27 compute-0 sudo[253163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsgchkujrnnluekdqkuijqehwsehuwvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917807.5170753-3795-178176592982639/AnsiballZ_file.py'
Oct 08 10:03:27 compute-0 sudo[253163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:27 compute-0 python3.9[253165]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:28 compute-0 sudo[253163]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:28.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:28 compute-0 sudo[253316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmqrrocmwfvvviguisjmmwfhekkcowdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917808.1411202-3795-40049892696061/AnsiballZ_file.py'
Oct 08 10:03:28 compute-0 sudo[253316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:28.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:28 compute-0 python3.9[253318]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:28 compute-0 sudo[253316]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:29 compute-0 sudo[253468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzhdzzxxwpmgjwjbcqucojlztlcvuioy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917808.7738025-3795-279907016454991/AnsiballZ_file.py'
Oct 08 10:03:29 compute-0 sudo[253468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:29 compute-0 ceph-mon[73572]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:03:29 compute-0 python3.9[253470]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:29 compute-0 sudo[253468]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:29 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:03:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:29 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:03:29 compute-0 sudo[253622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkbandaceyrbqglsfaxjtvohbxqfnzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917809.6737125-3861-257541321651675/AnsiballZ_file.py'
Oct 08 10:03:29 compute-0 sudo[253622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:03:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:30.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:30 compute-0 python3.9[253624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:30 compute-0 sudo[253622]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:30.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:30 compute-0 sudo[253774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjmtbsehrvwzfswmbialdbdhtnfftlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917810.2508295-3861-66988945573135/AnsiballZ_file.py'
Oct 08 10:03:30 compute-0 sudo[253774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:30 compute-0 python3.9[253776]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:30 compute-0 sudo[253774]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:31 compute-0 sudo[253927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irxilhitecwtxoklnihzkkdopcxapima ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917810.83766-3861-171807169452112/AnsiballZ_file.py'
Oct 08 10:03:31 compute-0 sudo[253927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:31 compute-0 ceph-mon[73572]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:03:31 compute-0 python3.9[253929]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:31 compute-0 sudo[253927]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:31 compute-0 sudo[254091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifyyfkjhaygjgnslxfuwreuklntqndsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917811.4174402-3861-24136205144101/AnsiballZ_file.py'
Oct 08 10:03:31 compute-0 sudo[254091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:31 compute-0 podman[254053]: 2025-10-08 10:03:31.694115792 +0000 UTC m=+0.080432407 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 08 10:03:31 compute-0 python3.9[254097]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:31 compute-0 sudo[254091]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:03:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:32.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:32 compute-0 sudo[254248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nttwjohsvotiasssrcybeyaxrnzimilb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917812.0239673-3861-189176185927132/AnsiballZ_file.py'
Oct 08 10:03:32 compute-0 sudo[254248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:32.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:32 compute-0 python3.9[254250]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:32 compute-0 sudo[254248]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:32 compute-0 sudo[254400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okmrihpurdrjvetkkxalatjhzfpeayro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917812.5739524-3861-96610838686506/AnsiballZ_file.py'
Oct 08 10:03:32 compute-0 sudo[254400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:03:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:03:33 compute-0 python3.9[254402]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:33 compute-0 sudo[254400]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:33 compute-0 ceph-mon[73572]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:03:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:03:33 compute-0 sudo[254566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjffcccltkvdxbjxvyfsqfmmcrmqsfjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917813.1699367-3861-116132161179237/AnsiballZ_file.py'
Oct 08 10:03:33 compute-0 sudo[254566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:33 compute-0 podman[254527]: 2025-10-08 10:03:33.444182644 +0000 UTC m=+0.047101327 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 08 10:03:33 compute-0 python3.9[254575]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:33 compute-0 sudo[254566]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:33 compute-0 podman[254628]: 2025-10-08 10:03:33.884931301 +0000 UTC m=+0.044534204 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 08 10:03:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:03:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:34.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:34 compute-0 sudo[254749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oblbxupkekzqujzdrxlwxvsbevaatnbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917813.8018386-3861-20827101083772/AnsiballZ_file.py'
Oct 08 10:03:34 compute-0 sudo[254749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:34 compute-0 python3.9[254751]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:34 compute-0 sudo[254749]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:34.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:34 compute-0 sudo[254901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljusgxnizhjqiopxwmfatixylefiaiib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917814.4271498-3861-112746262152297/AnsiballZ_file.py'
Oct 08 10:03:34 compute-0 sudo[254901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:34 compute-0 python3.9[254903]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:34 compute-0 sudo[254901]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:35 compute-0 ceph-mon[73572]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:03:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:03:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:03:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95e4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:36.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:36 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:37.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:03:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:37.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:03:37 compute-0 ceph-mon[73572]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:03:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:37 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:03:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:37 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:38.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:38 compute-0 ceph-mon[73572]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:03:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000066s ======
Oct 08 10:03:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:38.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000066s
Oct 08 10:03:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:38 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100339 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:03:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:39 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:03:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:39 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:40 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:40 compute-0 sudo[255074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-achitiuxzdxnofnprvutnotdqwuulszh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917820.3912241-4228-40309412311094/AnsiballZ_getent.py'
Oct 08 10:03:40 compute-0 sudo[255074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:40 compute-0 python3.9[255076]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 08 10:03:40 compute-0 ceph-mon[73572]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:03:41 compute-0 sudo[255074]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:41 compute-0 sudo[255228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pexvykcngaabffqokmrfiwhysaswrplx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917821.2424834-4252-122566783287459/AnsiballZ_group.py'
Oct 08 10:03:41 compute-0 sudo[255228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:41 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:41 compute-0 python3.9[255230]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 08 10:03:41 compute-0 groupadd[255231]: group added to /etc/group: name=nova, GID=42436
Oct 08 10:03:41 compute-0 groupadd[255231]: group added to /etc/gshadow: name=nova
Oct 08 10:03:41 compute-0 groupadd[255231]: new group: name=nova, GID=42436
Oct 08 10:03:41 compute-0 sudo[255228]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:03:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:41 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:42.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:42 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:42 compute-0 sudo[255387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocovhasbsoksvvccyeudhwhnqutpwmqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917822.109123-4276-107169715858359/AnsiballZ_user.py'
Oct 08 10:03:42 compute-0 sudo[255387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 08 10:03:42 compute-0 python3.9[255389]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 08 10:03:42 compute-0 useradd[255391]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 08 10:03:42 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 10:03:42 compute-0 useradd[255391]: add 'nova' to group 'libvirt'
Oct 08 10:03:42 compute-0 useradd[255391]: add 'nova' to shadow group 'libvirt'
Oct 08 10:03:42 compute-0 sudo[255387]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:43 compute-0 ceph-mon[73572]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:03:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:43 compute-0 sshd-session[255424]: Accepted publickey for zuul from 192.168.122.30 port 38780 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 10:03:43 compute-0 systemd-logind[798]: New session 57 of user zuul.
Oct 08 10:03:43 compute-0 systemd[1]: Started Session 57 of User zuul.
Oct 08 10:03:43 compute-0 sshd-session[255424]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 10:03:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:43 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:43 compute-0 sshd-session[255427]: Received disconnect from 192.168.122.30 port 38780:11: disconnected by user
Oct 08 10:03:43 compute-0 sshd-session[255427]: Disconnected from user zuul 192.168.122.30 port 38780
Oct 08 10:03:43 compute-0 sshd-session[255424]: pam_unix(sshd:session): session closed for user zuul
Oct 08 10:03:43 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Oct 08 10:03:43 compute-0 systemd-logind[798]: Session 57 logged out. Waiting for processes to exit.
Oct 08 10:03:43 compute-0 systemd-logind[798]: Removed session 57.
Oct 08 10:03:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:03:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:43 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:44.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:44.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:44 compute-0 python3.9[255578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:03:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:44 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:44 compute-0 sudo[255586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:03:44 compute-0 sudo[255586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:03:44 compute-0 sudo[255586]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:45 compute-0 python3.9[255724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917824.0350053-4351-149218111771006/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:45 compute-0 ceph-mon[73572]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:03:45 compute-0 python3.9[255875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:03:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:03:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:03:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:45 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:03:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:45 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:46.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:46 compute-0 python3.9[255952]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:46.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:46 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:46 compute-0 python3.9[256102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:03:47 compute-0 ceph-mon[73572]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:03:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:47.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:03:47 compute-0 python3.9[256224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917826.3231611-4351-95914248744709/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:03:47
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.nfs', 'backups', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms']
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:03:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:47 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:03:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:03:47 compute-0 python3.9[256374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:03:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:03:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:48 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:03:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:48.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:03:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:03:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:03:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:03:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:03:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:03:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:03:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:03:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:03:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:03:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:03:48 compute-0 python3.9[256496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917827.448843-4351-106523650856419/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:48.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:48 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:49 compute-0 python3.9[256646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:03:49 compute-0 ceph-mon[73572]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:03:49 compute-0 python3.9[256768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917828.5669715-4351-82408603192714/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:49 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:03:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:50 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:50.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:50.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:50 compute-0 sudo[256919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvzowtbanqftuhovbuqcsdlgotwvubwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917830.201787-4558-258848793759881/AnsiballZ_file.py'
Oct 08 10:03:50 compute-0 sudo[256919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:50 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:50 compute-0 python3.9[256921]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:50 compute-0 sudo[256919]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:51 compute-0 sudo[257072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wukdxwcdumqqdsxcxfpezgfhzgusepga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917830.9301388-4582-39430347897443/AnsiballZ_copy.py'
Oct 08 10:03:51 compute-0 sudo[257072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:51 compute-0 ceph-mon[73572]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:03:51 compute-0 python3.9[257074]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:03:51 compute-0 sudo[257072]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100351 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:03:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:51 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:03:51 compute-0 sudo[257225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsuekylkshuxupryailjljyxfaxzseyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917831.7260072-4606-233021377421696/AnsiballZ_stat.py'
Oct 08 10:03:51 compute-0 sudo[257225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:52 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:52.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:52 compute-0 python3.9[257227]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:03:52 compute-0 sudo[257225]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:52.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:52 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:52 compute-0 sudo[257377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swuoogllulmbcmkcdqzgnbapmltefwdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917832.5153425-4630-255554052726038/AnsiballZ_stat.py'
Oct 08 10:03:52 compute-0 sudo[257377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:52 compute-0 python3.9[257379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:03:52 compute-0 sudo[257377]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:53 compute-0 ceph-mon[73572]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:03:53 compute-0 sudo[257501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrxuqgobvqwuuglgdrhkrurxblqqqdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917832.5153425-4630-255554052726038/AnsiballZ_copy.py'
Oct 08 10:03:53 compute-0 sudo[257501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:53 compute-0 python3.9[257503]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759917832.5153425-4630-255554052726038/.source _original_basename=.a0azf5v0 follow=False checksum=aa5b6f2aeb9b9f06df5d35930eb43189722f291f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 08 10:03:53 compute-0 sudo[257501]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:53 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:03:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:54 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:03:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:03:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:54.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:54 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:54 compute-0 podman[257531]: 2025-10-08 10:03:54.918977671 +0000 UTC m=+0.078479964 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:03:55 compute-0 ceph-mon[73572]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:03:55 compute-0 python3.9[257683]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:03:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:03:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:03:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:55 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:03:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:56 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:03:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:56.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:03:56 compute-0 python3.9[257836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:03:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:56.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:56 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:56 compute-0 python3.9[257957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917835.990281-4708-8248816668540/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:57.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:03:57 compute-0 ceph-mon[73572]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:03:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:03:57.401 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:03:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:03:57.401 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:03:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:03:57.402 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:03:57 compute-0 python3.9[258108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 10:03:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:57 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:03:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:58 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:58.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:58 compute-0 python3.9[258230]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917837.320607-4753-264369837493437/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 10:03:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:03:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:03:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:58.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:03:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:58 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:03:59 compute-0 sudo[258381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyxmdbmbpqlbzvkbkceimwgruocvlrrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917838.7883105-4804-157218437845865/AnsiballZ_container_config_data.py'
Oct 08 10:03:59 compute-0 sudo[258381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:03:59 compute-0 python3.9[258383]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 08 10:03:59 compute-0 ceph-mon[73572]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:03:59 compute-0 sudo[258381]: pam_unix(sudo:session): session closed for user root
Oct 08 10:03:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:59 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:03:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:00 compute-0 sudo[258534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oahgxpejgrfzlkqwlejtzvsakqewdxvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917839.7068233-4831-272091115140085/AnsiballZ_container_config_hash.py'
Oct 08 10:04:00 compute-0 sudo[258534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:00 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:00.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:00 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:04:00 compute-0 python3.9[258536]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 10:04:00 compute-0 sudo[258534]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:00.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:00 compute-0 sudo[258561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:04:00 compute-0 sudo[258561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:00 compute-0 sudo[258561]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:00 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:00 compute-0 sudo[258586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 08 10:04:00 compute-0 sudo[258586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:00 compute-0 sudo[258586]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:04:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:04:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:00 compute-0 sudo[258774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjlyotpaaldnigvuzdmrrlbfpfswcsh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917840.7027154-4861-120327439913316/AnsiballZ_edpm_container_manage.py'
Oct 08 10:04:00 compute-0 sudo[258774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:00 compute-0 sudo[258740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:04:00 compute-0 sudo[258740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 10:04:00 compute-0 sudo[258740]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 10:04:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:01 compute-0 sudo[258784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:04:01 compute-0 sudo[258784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:01 compute-0 python3[258782]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 10:04:01 compute-0 ceph-mon[73572]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:01 compute-0 sudo[258784]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:04:01 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:04:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:04:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:04:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:04:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:04:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:04:01 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:04:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:04:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:04:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:04:01 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:04:01 compute-0 sudo[258868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:04:01 compute-0 sudo[258868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:01 compute-0 sudo[258868]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:01 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:01 compute-0 sudo[258899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:04:01 compute-0 sudo[258899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:01 compute-0 podman[258892]: 2025-10-08 10:04:01.821962626 +0000 UTC m=+0.052181171 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 08 10:04:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:02 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:02 compute-0 podman[258985]: 2025-10-08 10:04:02.30179117 +0000 UTC m=+0.071855458 container create 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:04:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:04:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:04:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:04:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:04:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:04:02 compute-0 systemd[1]: Started libpod-conmon-7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2.scope.
Oct 08 10:04:02 compute-0 podman[258985]: 2025-10-08 10:04:02.263745178 +0000 UTC m=+0.033809486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:04:02 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:04:02 compute-0 podman[258985]: 2025-10-08 10:04:02.394223565 +0000 UTC m=+0.164287873 container init 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 08 10:04:02 compute-0 podman[258985]: 2025-10-08 10:04:02.402621657 +0000 UTC m=+0.172685945 container start 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:04:02 compute-0 podman[258985]: 2025-10-08 10:04:02.40858418 +0000 UTC m=+0.178648488 container attach 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 10:04:02 compute-0 systemd[1]: libpod-7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2.scope: Deactivated successfully.
Oct 08 10:04:02 compute-0 bold_cannon[259002]: 167 167
Oct 08 10:04:02 compute-0 conmon[259002]: conmon 7af71b2119af77dfe876 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2.scope/container/memory.events
Oct 08 10:04:02 compute-0 podman[258985]: 2025-10-08 10:04:02.412889709 +0000 UTC m=+0.182953997 container died 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:04:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e831925f32963ea8b6e8d3adc36d814671f0d156e880f19e481750d4762484ab-merged.mount: Deactivated successfully.
Oct 08 10:04:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:02.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:02 compute-0 podman[258985]: 2025-10-08 10:04:02.490890856 +0000 UTC m=+0.260955144 container remove 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:04:02 compute-0 systemd[1]: libpod-conmon-7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2.scope: Deactivated successfully.
Oct 08 10:04:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:02 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:02 compute-0 podman[259025]: 2025-10-08 10:04:02.682207563 +0000 UTC m=+0.070117832 container create 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.709062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842709096, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1031, "num_deletes": 251, "total_data_size": 1789867, "memory_usage": 1815304, "flush_reason": "Manual Compaction"}
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 08 10:04:02 compute-0 systemd[1]: Started libpod-conmon-00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7.scope.
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842717499, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1752255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18970, "largest_seqno": 20000, "table_properties": {"data_size": 1747236, "index_size": 2543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10804, "raw_average_key_size": 19, "raw_value_size": 1737251, "raw_average_value_size": 3164, "num_data_blocks": 113, "num_entries": 549, "num_filter_entries": 549, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917753, "oldest_key_time": 1759917753, "file_creation_time": 1759917842, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 8472 microseconds, and 3658 cpu microseconds.
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.717534) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1752255 bytes OK
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.717552) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.719629) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.719641) EVENT_LOG_v1 {"time_micros": 1759917842719637, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.719657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1785159, prev total WAL file size 1785159, number of live WAL files 2.
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.720135) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1711KB)], [41(12MB)]
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842720171, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15067911, "oldest_snapshot_seqno": -1}
Oct 08 10:04:02 compute-0 podman[259025]: 2025-10-08 10:04:02.637559977 +0000 UTC m=+0.025470276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:04:02 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:04:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4971 keys, 12860429 bytes, temperature: kUnknown
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842888931, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 12860429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12826366, "index_size": 20513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126829, "raw_average_key_size": 25, "raw_value_size": 12735195, "raw_average_value_size": 2561, "num_data_blocks": 838, "num_entries": 4971, "num_filter_entries": 4971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917842, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.889242) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12860429 bytes
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.942554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.2 rd, 76.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 12.7 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(15.9) write-amplify(7.3) OK, records in: 5489, records dropped: 518 output_compression: NoCompression
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.942595) EVENT_LOG_v1 {"time_micros": 1759917842942578, "job": 20, "event": "compaction_finished", "compaction_time_micros": 168835, "compaction_time_cpu_micros": 22288, "output_level": 6, "num_output_files": 1, "total_output_size": 12860429, "num_input_records": 5489, "num_output_records": 4971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842943138, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842945197, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.720100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:04:02 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:04:02 compute-0 podman[259025]: 2025-10-08 10:04:02.945884085 +0000 UTC m=+0.333794384 container init 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 08 10:04:02 compute-0 podman[259025]: 2025-10-08 10:04:02.954485594 +0000 UTC m=+0.342395863 container start 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 10:04:02 compute-0 podman[259025]: 2025-10-08 10:04:02.988449964 +0000 UTC m=+0.376360233 container attach 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:04:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:03 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:04:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:03 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:04:03 compute-0 blissful_swartz[259040]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:04:03 compute-0 blissful_swartz[259040]: --> All data devices are unavailable
Oct 08 10:04:03 compute-0 ceph-mon[73572]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:04:03 compute-0 systemd[1]: libpod-00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7.scope: Deactivated successfully.
Oct 08 10:04:03 compute-0 podman[259025]: 2025-10-08 10:04:03.338509074 +0000 UTC m=+0.726419333 container died 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:04:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755-merged.mount: Deactivated successfully.
Oct 08 10:04:03 compute-0 podman[259025]: 2025-10-08 10:04:03.382976785 +0000 UTC m=+0.770887054 container remove 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:04:03 compute-0 systemd[1]: libpod-conmon-00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7.scope: Deactivated successfully.
Oct 08 10:04:03 compute-0 sudo[258899]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:03 compute-0 sudo[259068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:04:03 compute-0 sudo[259068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:03 compute-0 sudo[259068]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:03 compute-0 podman[259092]: 2025-10-08 10:04:03.61745841 +0000 UTC m=+0.068792199 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 08 10:04:03 compute-0 sudo[259100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:04:03 compute-0 sudo[259100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:03 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:04 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:04.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:04 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:04 compute-0 sudo[259167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:04:04 compute-0 sudo[259167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:04 compute-0 sudo[259167]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:05 compute-0 ceph-mon[73572]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:04:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:04:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:05 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:06 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:06 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:04:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:06 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:07.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:04:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:07.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:04:07 compute-0 ceph-mon[73572]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:07 compute-0 podman[259191]: 2025-10-08 10:04:07.679741774 +0000 UTC m=+2.948483934 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 08 10:04:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:07 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:08 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:08.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:08 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:08 compute-0 ceph-mon[73572]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:09 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:04:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:10 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:10.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:10.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:10 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100411 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:04:11 compute-0 ceph-mon[73572]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:04:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:11 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d0001a60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:12 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:12 compute-0 podman[258838]: 2025-10-08 10:04:12.156256358 +0000 UTC m=+10.840067356 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 08 10:04:12 compute-0 podman[259283]: 2025-10-08 10:04:12.247351049 +0000 UTC m=+0.039217901 container create e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:04:12 compute-0 systemd[1]: Started libpod-conmon-e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3.scope.
Oct 08 10:04:12 compute-0 podman[259310]: 2025-10-08 10:04:12.297566136 +0000 UTC m=+0.050000912 container create 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:04:12 compute-0 podman[259310]: 2025-10-08 10:04:12.272744091 +0000 UTC m=+0.025178897 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 08 10:04:12 compute-0 python3[258782]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 08 10:04:12 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:04:12 compute-0 podman[259283]: 2025-10-08 10:04:12.22761626 +0000 UTC m=+0.019483132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:04:12 compute-0 podman[259283]: 2025-10-08 10:04:12.326888065 +0000 UTC m=+0.118754937 container init e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:04:12 compute-0 podman[259283]: 2025-10-08 10:04:12.33412199 +0000 UTC m=+0.125988842 container start e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 10:04:12 compute-0 podman[259283]: 2025-10-08 10:04:12.33785485 +0000 UTC m=+0.129721702 container attach e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 10:04:12 compute-0 sad_chebyshev[259325]: 167 167
Oct 08 10:04:12 compute-0 systemd[1]: libpod-e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3.scope: Deactivated successfully.
Oct 08 10:04:12 compute-0 podman[259283]: 2025-10-08 10:04:12.340835677 +0000 UTC m=+0.132702539 container died e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 08 10:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-535185971cf6724a5965de38d47e1771e774d3c85a9f40b609e63e3de4eff4ab-merged.mount: Deactivated successfully.
Oct 08 10:04:12 compute-0 podman[259283]: 2025-10-08 10:04:12.387111987 +0000 UTC m=+0.178978839 container remove e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 10:04:12 compute-0 systemd[1]: libpod-conmon-e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3.scope: Deactivated successfully.
Oct 08 10:04:12 compute-0 sudo[258774]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:12.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:12 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:12 compute-0 podman[259382]: 2025-10-08 10:04:12.589694309 +0000 UTC m=+0.068633394 container create 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:04:12 compute-0 podman[259382]: 2025-10-08 10:04:12.542070796 +0000 UTC m=+0.021009901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:04:12 compute-0 systemd[1]: Started libpod-conmon-263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754.scope.
Oct 08 10:04:12 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:13 compute-0 podman[259382]: 2025-10-08 10:04:13.387200343 +0000 UTC m=+0.866139518 container init 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:04:13 compute-0 podman[259382]: 2025-10-08 10:04:13.399395828 +0000 UTC m=+0.878334943 container start 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:04:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:13 compute-0 magical_gauss[259416]: {
Oct 08 10:04:13 compute-0 magical_gauss[259416]:     "1": [
Oct 08 10:04:13 compute-0 magical_gauss[259416]:         {
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "devices": [
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "/dev/loop3"
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             ],
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "lv_name": "ceph_lv0",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "lv_size": "21470642176",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "name": "ceph_lv0",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "tags": {
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.cluster_name": "ceph",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.crush_device_class": "",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.encrypted": "0",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.osd_id": "1",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.type": "block",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.vdo": "0",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:                 "ceph.with_tpm": "0"
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             },
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "type": "block",
Oct 08 10:04:13 compute-0 magical_gauss[259416]:             "vg_name": "ceph_vg0"
Oct 08 10:04:13 compute-0 magical_gauss[259416]:         }
Oct 08 10:04:13 compute-0 magical_gauss[259416]:     ]
Oct 08 10:04:13 compute-0 magical_gauss[259416]: }
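[editor's note] The JSON block logged above by the short-lived ceph container (magical_gauss) has the shape of a ceph-volume LVM listing: the top-level key is the OSD id and each entry carries the logical volume path plus the ceph.* lv_tags that bind it to OSD 1. As a minimal sketch only (the idea of re-parsing this journal-extracted JSON, and the truncated sample payload below, are assumptions, not part of this log), one could map OSD ids to their backing devices like this:

    import json

    # Hypothetical: `payload` stands in for the JSON block logged above
    # (top-level key = OSD id, one LV record per entry; fields abbreviated).
    payload = """{ "1": [ { "lv_path": "/dev/ceph_vg0/ceph_lv0",
                            "tags": { "ceph.osd_id": "1",
                                      "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
                                      "ceph.type": "block" } } ] }"""

    listing = json.loads(payload)
    for osd_id, lvs in listing.items():
        for lv in lvs:
            # Each record's tags identify which OSD the logical volume backs.
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])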
Oct 08 10:04:13 compute-0 podman[259382]: 2025-10-08 10:04:13.731656101 +0000 UTC m=+1.210595206 container attach 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:04:13 compute-0 systemd[1]: libpod-263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754.scope: Deactivated successfully.
Oct 08 10:04:13 compute-0 podman[259426]: 2025-10-08 10:04:13.778247841 +0000 UTC m=+0.027189793 container died 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 08 10:04:13 compute-0 ceph-mon[73572]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:13 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:14 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d0001a60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:14.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4-merged.mount: Deactivated successfully.
Oct 08 10:04:14 compute-0 podman[259426]: 2025-10-08 10:04:14.346125236 +0000 UTC m=+0.595067168 container remove 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:04:14 compute-0 systemd[1]: libpod-conmon-263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754.scope: Deactivated successfully.
Oct 08 10:04:14 compute-0 sudo[259100]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:14 compute-0 sudo[259442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:04:14 compute-0 sudo[259442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:14 compute-0 sudo[259442]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:14.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:14 compute-0 sudo[259467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:04:14 compute-0 sudo[259467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:14 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:15 compute-0 podman[259533]: 2025-10-08 10:04:14.959792076 +0000 UTC m=+0.023131390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:04:15 compute-0 podman[259533]: 2025-10-08 10:04:15.070488711 +0000 UTC m=+0.133827995 container create e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:04:15 compute-0 systemd[1]: Started libpod-conmon-e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6.scope.
Oct 08 10:04:15 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:04:15 compute-0 podman[259533]: 2025-10-08 10:04:15.158919817 +0000 UTC m=+0.222259151 container init e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:04:15 compute-0 podman[259533]: 2025-10-08 10:04:15.165099046 +0000 UTC m=+0.228438330 container start e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:04:15 compute-0 elastic_sinoussi[259550]: 167 167
Oct 08 10:04:15 compute-0 systemd[1]: libpod-e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6.scope: Deactivated successfully.
Oct 08 10:04:15 compute-0 podman[259533]: 2025-10-08 10:04:15.191315556 +0000 UTC m=+0.254654840 container attach e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 10:04:15 compute-0 podman[259533]: 2025-10-08 10:04:15.191986118 +0000 UTC m=+0.255325392 container died e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 10:04:15 compute-0 ceph-mon[73572]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba8cb00e5e4ca3891e4072f9532e67994b7ec62615360e50019acb434dab8f96-merged.mount: Deactivated successfully.
Oct 08 10:04:15 compute-0 podman[259533]: 2025-10-08 10:04:15.733706595 +0000 UTC m=+0.797045899 container remove e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:04:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:15] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:04:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:15] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:04:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:15 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy ignored for local
Oct 08 10:04:15 compute-0 kernel: ganesha.nfsd[259218]: segfault at 50 ip 00007f968ea3532e sp 00007f964fffe210 error 4 in libntirpc.so.5.8[7f968ea1a000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 08 10:04:15 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
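[editor's note] The "segfault at ..." kernel line above reports the faulting instruction pointer together with the base and length of the mapping it fell in (libntirpc.so.5.8 for PID 259218). Subtracting the base gives the offset within the shared object, which is the value one would feed to addr2line or a disassembler against that library. A small sketch of that arithmetic, using only the values copied from the line above:

    # Values copied from the kernel segfault line for ganesha.nfsd (PID 259218).
    ip = 0x7f968ea3532e      # faulting instruction pointer
    base = 0x7f968ea1a000    # start of the libntirpc.so.5.8 mapping
    length = 0x2c000         # length of that mapping

    offset = ip - base
    assert 0 <= offset < length   # the fault lies inside the reported mapping
    print(hex(offset))            # 0x1b32e -> usable with addr2line / objdump -d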
Oct 08 10:04:15 compute-0 systemd[1]: Started Process Core Dump (PID 259617/UID 0).
Oct 08 10:04:15 compute-0 systemd[1]: libpod-conmon-e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6.scope: Deactivated successfully.
Oct 08 10:04:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:15 compute-0 podman[259627]: 2025-10-08 10:04:15.89465683 +0000 UTC m=+0.027120391 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:04:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:16.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:16 compute-0 podman[259627]: 2025-10-08 10:04:16.047109958 +0000 UTC m=+0.179573479 container create 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:04:16 compute-0 sudo[259715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwhpftsycatvkrhfspmsswekuuohgsbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917855.7636807-4885-21038724923331/AnsiballZ_stat.py'
Oct 08 10:04:16 compute-0 sudo[259715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:16 compute-0 systemd[1]: Started libpod-conmon-183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20.scope.
Oct 08 10:04:16 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:16 compute-0 podman[259627]: 2025-10-08 10:04:16.219527324 +0000 UTC m=+0.351990875 container init 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 10:04:16 compute-0 podman[259627]: 2025-10-08 10:04:16.232276886 +0000 UTC m=+0.364740397 container start 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:04:16 compute-0 python3.9[259717]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:04:16 compute-0 sudo[259715]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:16 compute-0 podman[259627]: 2025-10-08 10:04:16.726264359 +0000 UTC m=+0.858727880 container attach 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:04:16 compute-0 lvm[259821]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:04:16 compute-0 lvm[259821]: VG ceph_vg0 finished
Oct 08 10:04:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:17.079Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:04:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:17.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:04:17 compute-0 sudo[259949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elapprbpscxzyggfniflqxhgyqraeusg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917856.950852-4921-36350234865587/AnsiballZ_container_config_data.py'
Oct 08 10:04:17 compute-0 sudo[259949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:17 compute-0 youthful_heyrovsky[259720]: {}
Oct 08 10:04:17 compute-0 systemd[1]: libpod-183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20.scope: Deactivated successfully.
Oct 08 10:04:17 compute-0 systemd[1]: libpod-183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20.scope: Consumed 1.094s CPU time.
Oct 08 10:04:17 compute-0 podman[259952]: 2025-10-08 10:04:17.332255979 +0000 UTC m=+0.023414919 container died 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:04:17 compute-0 python3.9[259951]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 08 10:04:17 compute-0 sudo[259949]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:04:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:04:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:04:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:04:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:17 compute-0 ceph-mon[73572]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:18.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:04:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:04:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:04:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:04:18 compute-0 sudo[260115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbwfrazuvnepbbfldyttebadclblnlas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917857.8859408-4948-214097583988932/AnsiballZ_container_config_hash.py'
Oct 08 10:04:18 compute-0 sudo[260115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:18 compute-0 systemd-coredump[259622]: Process 252462 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007f968ea3532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 10:04:18 compute-0 systemd[1]: systemd-coredump@8-259617-0.service: Deactivated successfully.
Oct 08 10:04:18 compute-0 systemd[1]: systemd-coredump@8-259617-0.service: Consumed 1.289s CPU time.
Oct 08 10:04:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:04:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:18.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:04:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777-merged.mount: Deactivated successfully.
Oct 08 10:04:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:04:18 compute-0 ceph-mon[73572]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:19 compute-0 podman[259952]: 2025-10-08 10:04:19.016687125 +0000 UTC m=+1.707846055 container remove 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:04:19 compute-0 systemd[1]: libpod-conmon-183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20.scope: Deactivated successfully.
Oct 08 10:04:19 compute-0 podman[260122]: 2025-10-08 10:04:19.049511738 +0000 UTC m=+0.713520494 container died 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 08 10:04:19 compute-0 sudo[259467]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:04:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d-merged.mount: Deactivated successfully.
Oct 08 10:04:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:04:19 compute-0 python3.9[260117]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 10:04:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:19 compute-0 sudo[260115]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:19 compute-0 podman[260122]: 2025-10-08 10:04:19.159716718 +0000 UTC m=+0.823725444 container remove 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 10:04:19 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 10:04:19 compute-0 sudo[260141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:04:19 compute-0 sudo[260141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:19 compute-0 sudo[260141]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:19 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 10:04:19 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.628s CPU time.
Oct 08 10:04:19 compute-0 sudo[260343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suvgksmoylhcqwbqtrjwwqmunzeyeaee ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759917859.4713-4978-76205447991372/AnsiballZ_edpm_container_manage.py'
Oct 08 10:04:19 compute-0 sudo[260343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:20 compute-0 python3[260345]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 10:04:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:20.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:04:20 compute-0 podman[260382]: 2025-10-08 10:04:20.227304182 +0000 UTC m=+0.055899862 container create 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct 08 10:04:20 compute-0 podman[260382]: 2025-10-08 10:04:20.196742112 +0000 UTC m=+0.025337822 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 08 10:04:20 compute-0 python3[260345]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct 08 10:04:20 compute-0 sudo[260343]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:20 compute-0 sudo[260570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhkvxjzlqxellycvmmxnntbnwrlrrqlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917860.6390579-5002-49173124308413/AnsiballZ_stat.py'
Oct 08 10:04:20 compute-0 sudo[260570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:21 compute-0 python3.9[260572]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:04:21 compute-0 ceph-mon[73572]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:21 compute-0 sudo[260570]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:21 compute-0 sudo[260725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snykriunecwrsfrvwznoqsxwlhswvofq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917861.5579658-5029-147643325066546/AnsiballZ_file.py'
Oct 08 10:04:21 compute-0 sudo[260725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:04:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:22.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:22 compute-0 python3.9[260727]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:04:22 compute-0 sudo[260725]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:22 compute-0 sudo[260877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysxnyplofsbvderdecxcrpbpmipjssbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917862.1342084-5029-258447652750949/AnsiballZ_copy.py'
Oct 08 10:04:22 compute-0 sudo[260877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:22 compute-0 python3.9[260879]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917862.1342084-5029-258447652750949/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 10:04:22 compute-0 sudo[260877]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:23 compute-0 sudo[260953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izptsgjcytsvwfhsxgbvqrzckhicxciu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917862.1342084-5029-258447652750949/AnsiballZ_systemd.py'
Oct 08 10:04:23 compute-0 sudo[260953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:23 compute-0 ceph-mon[73572]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:04:23 compute-0 python3.9[260955]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 10:04:23 compute-0 systemd[1]: Reloading.
Oct 08 10:04:23 compute-0 systemd-rc-local-generator[260983]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:04:23 compute-0 systemd-sysv-generator[260986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:04:23 compute-0 sudo[260953]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100423 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:04:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:04:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:23 compute-0 sudo[261065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxqywsznersoimxurhdfzbhewiyorkxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917862.1342084-5029-258447652750949/AnsiballZ_systemd.py'
Oct 08 10:04:23 compute-0 sudo[261065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:24.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:24 compute-0 python3.9[261067]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 10:04:24 compute-0 systemd[1]: Reloading.
Oct 08 10:04:24 compute-0 systemd-rc-local-generator[261094]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 10:04:24 compute-0 systemd-sysv-generator[261100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 10:04:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:24.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:24 compute-0 systemd[1]: Starting nova_compute container...
Oct 08 10:04:24 compute-0 sudo[261113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:04:24 compute-0 sudo[261113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:24 compute-0 sudo[261113]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:24 compute-0 podman[261107]: 2025-10-08 10:04:24.884504578 +0000 UTC m=+0.140919066 container init 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:04:24 compute-0 podman[261107]: 2025-10-08 10:04:24.891642399 +0000 UTC m=+0.148056857 container start 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0)
Oct 08 10:04:24 compute-0 nova_compute[261144]: + sudo -E kolla_set_configs
Oct 08 10:04:24 compute-0 podman[261107]: nova_compute
Oct 08 10:04:24 compute-0 systemd[1]: Started nova_compute container.
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Validating config file
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying service configuration files
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 08 10:04:24 compute-0 sudo[261065]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Deleting /etc/ceph
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Creating directory /etc/ceph
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Writing out command to execute
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 08 10:04:24 compute-0 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
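The Deleting/Copying/Setting-permission sequence above is kolla_set_configs working through /var/lib/kolla/config_files/config.json (loaded and validated at 10:04:24 with strategy COPY_ALWAYS). A rough Python sketch of that copy loop, assuming the usual config.json layout of a command plus config_files entries with source, dest, owner and perm fields; the schema details and default values here are illustrative, and the real tool additionally handles globs, optional entries and merge semantics:

    import json
    import os
    import shutil
    import subprocess

    with open("/var/lib/kolla/config_files/config.json") as f:
        config = json.load(f)

    for entry in config.get("config_files", []):
        source, dest = entry["source"], entry["dest"]
        if os.path.exists(dest):
            print(f"Deleting {dest}")
            shutil.rmtree(dest) if os.path.isdir(dest) else os.remove(dest)
        print(f"Copying {source} to {dest}")
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        if os.path.isdir(source):
            shutil.copytree(source, dest)
        else:
            shutil.copy2(source, dest)
        print(f"Setting permission for {dest}")
        subprocess.run(["chown", entry.get("owner", "root"), dest], check=True)
        subprocess.run(["chmod", entry.get("perm", "0600"), dest], check=True)

The second round of "Setting permission" lines that follows "Writing out command to execute" most likely corresponds to the separate permissions section of the same config.json being applied after the copies.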
Oct 08 10:04:24 compute-0 nova_compute[261144]: ++ cat /run_command
Oct 08 10:04:25 compute-0 nova_compute[261144]: + CMD=nova-compute
Oct 08 10:04:25 compute-0 nova_compute[261144]: + ARGS=
Oct 08 10:04:25 compute-0 nova_compute[261144]: + sudo kolla_copy_cacerts
Oct 08 10:04:25 compute-0 nova_compute[261144]: + [[ ! -n '' ]]
Oct 08 10:04:25 compute-0 nova_compute[261144]: + . kolla_extend_start
Oct 08 10:04:25 compute-0 nova_compute[261144]: Running command: 'nova-compute'
Oct 08 10:04:25 compute-0 nova_compute[261144]: + echo 'Running command: '\''nova-compute'\'''
Oct 08 10:04:25 compute-0 nova_compute[261144]: + umask 0022
Oct 08 10:04:25 compute-0 nova_compute[261144]: + exec nova-compute
Oct 08 10:04:25 compute-0 podman[261155]: 2025-10-08 10:04:25.058752362 +0000 UTC m=+0.098152790 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 08 10:04:25 compute-0 ceph-mon[73572]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:04:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:04:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:04:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:04:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:26.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:26 compute-0 python3.9[261335]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:04:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:26.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:27.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:04:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:27.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:04:27 compute-0 python3.9[261486]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:04:27 compute-0 ceph-mon[73572]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:04:27 compute-0 python3.9[261637]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 10:04:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:04:28 compute-0 nova_compute[261144]: 2025-10-08 10:04:28.048 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 08 10:04:28 compute-0 nova_compute[261144]: 2025-10-08 10:04:28.049 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 08 10:04:28 compute-0 nova_compute[261144]: 2025-10-08 10:04:28.049 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 08 10:04:28 compute-0 nova_compute[261144]: 2025-10-08 10:04:28.049 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
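The three "Loaded VIF plugin class" lines show os_vif discovering its plugins during initialize; os_vif resolves these through stevedore entry points. A small sketch of the same discovery, assuming the plugins are registered under the 'os_vif' entry-point namespace:

    from stevedore import extension

    # Enumerate VIF plugins registered under the 'os_vif' entry-point
    # namespace (linux_bridge, noop and ovs in the log above).
    manager = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    for ext in manager:
        print(f"Loaded VIF plugin class {ext.plugin!r} with name {ext.name!r}")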
Oct 08 10:04:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:28.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:28 compute-0 nova_compute[261144]: 2025-10-08 10:04:28.229 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:04:28 compute-0 nova_compute[261144]: 2025-10-08 10:04:28.252 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
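The grep run at 10:04:28 appears to be a capability probe: the service checks whether the installed iscsiadm binary contains the node.session.scan string, which is used to decide whether manual iSCSI session scanning is available. The same probe, sketched with oslo.concurrency's processutils (the module doing the work in the log):

    from oslo_concurrency import processutils

    try:
        # Exit code 0 means the string is present in the binary, i.e. this
        # iscsiadm build knows the node.session.scan option.
        processutils.execute("grep", "-F", "node.session.scan", "/sbin/iscsiadm")
        manual_scan_supported = True
    except processutils.ProcessExecutionError:
        manual_scan_supported = False
    print(f"manual iSCSI scan support: {manual_scan_supported}")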
Oct 08 10:04:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:28.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:28 compute-0 sudo[261792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keqituybfhpzicmhiwbngiufjdnanfyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917868.2826312-5209-181840199027361/AnsiballZ_podman_container.py'
Oct 08 10:04:28 compute-0 sudo[261792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:28 compute-0 python3.9[261794]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
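The podman_container task above requests state=absent with force_delete=True for nova_nvme_cleaner, i.e. remove the container if it exists. Outside Ansible this amounts, roughly, to a forced podman rm; a minimal sketch that tolerates the container already being gone:

    import subprocess

    # Roughly what state=absent + force_delete=True amounts to; the check on
    # stderr tolerates the container not existing in the first place.
    result = subprocess.run(
        ["podman", "rm", "--force", "nova_nvme_cleaner"],
        capture_output=True, text=True,
    )
    if result.returncode != 0 and "no such container" not in result.stderr.lower():
        raise RuntimeError(result.stderr.strip())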
Oct 08 10:04:28 compute-0 nova_compute[261144]: 2025-10-08 10:04:28.817 2 INFO nova.virt.driver [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 08 10:04:28 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 10:04:28 compute-0 sudo[261792]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.088 2 INFO nova.compute.provider_config [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.103 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.103 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.103 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
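The singleton_lock acquire/release pair above comes from oslo.concurrency's lockutils, and the "Full set of CONF" block that follows is oslo.config dumping every resolved option at DEBUG level via log_opt_values(); secret options such as transport_url are masked as ****. A compact sketch of both, assuming an option set already registered against CONF and the config files listed in the dump:

    import logging

    from oslo_concurrency import lockutils
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Serialize part of startup under the same lock name seen in the log.
    with lockutils.lock("singleton_lock"):
        pass  # launcher bookkeeping would happen here

    # Parse the config files, then emit one DEBUG line per option, matching
    # the "Full set of CONF:" dump that follows in the journal.
    CONF(["--config-file", "/etc/nova/nova.conf"], project="nova")
    CONF.log_opt_values(LOG, logging.DEBUG)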
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.117 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.117 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.117 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.117 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.168 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.168 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.168 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.168 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.171 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.171 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.171 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.171 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.173 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.173 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.173 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.173 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.189 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.189 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.189 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.189 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.195 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.195 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.195 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.195 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.195 2 WARNING oslo_config.cfg [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 08 10:04:29 compute-0 nova_compute[261144]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 08 10:04:29 compute-0 nova_compute[261144]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 08 10:04:29 compute-0 nova_compute[261144]: and ``live_migration_inbound_addr`` respectively.
Oct 08 10:04:29 compute-0 nova_compute[261144]: ).  Its value may be silently ignored in the future.
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_secret_uuid        = 787292cc-8154-50c4-9e00-e9be3e817149 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.253 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.253 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.253 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.253 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.267 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.267 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.267 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.268 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.287 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.288 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.288 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.288 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 08 10:04:29 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 08 10:04:29 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 08 10:04:29 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 9.
Oct 08 10:04:29 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:04:29 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.628s CPU time.
Oct 08 10:04:29 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 10:04:29 compute-0 ceph-mon[73572]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.364 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6553ebb4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.367 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6553ebb4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.368 2 INFO nova.virt.libvirt.driver [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Connection event '1' reason 'None'
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.381 2 WARNING nova.virt.libvirt.driver [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 08 10:04:29 compute-0 nova_compute[261144]: 2025-10-08 10:04:29.382 2 DEBUG nova.virt.libvirt.volume.mount [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 08 10:04:29 compute-0 podman[262030]: 2025-10-08 10:04:29.559145679 +0000 UTC m=+0.042554919 container create dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 10:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:29 compute-0 sudo[262081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omciyhtdwomsvobnqgiubmizrakdppce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917869.320313-5233-31573914314726/AnsiballZ_systemd.py'
Oct 08 10:04:29 compute-0 sudo[262081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:29 compute-0 podman[262030]: 2025-10-08 10:04:29.619109521 +0000 UTC m=+0.102518791 container init dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 10:04:29 compute-0 podman[262030]: 2025-10-08 10:04:29.624666771 +0000 UTC m=+0.108076011 container start dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:04:29 compute-0 bash[262030]: dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd
Oct 08 10:04:29 compute-0 podman[262030]: 2025-10-08 10:04:29.540161935 +0000 UTC m=+0.023571195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:04:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 10:04:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 10:04:29 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:04:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 10:04:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 10:04:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 10:04:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 10:04:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 10:04:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:04:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:04:29 compute-0 python3.9[262088]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 10:04:30 compute-0 systemd[1]: Stopping nova_compute container...
Oct 08 10:04:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:04:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:30.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:04:30 compute-0 nova_compute[261144]: 2025-10-08 10:04:30.117 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:04:30 compute-0 nova_compute[261144]: 2025-10-08 10:04:30.117 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:04:30 compute-0 nova_compute[261144]: 2025-10-08 10:04:30.117 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:04:30 compute-0 ceph-mon[73572]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:04:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:30.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:30 compute-0 virtqemud[261885]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 08 10:04:30 compute-0 virtqemud[261885]: hostname: compute-0
Oct 08 10:04:30 compute-0 virtqemud[261885]: End of file while reading data: Input/output error
Oct 08 10:04:30 compute-0 systemd[1]: libpod-10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2.scope: Deactivated successfully.
Oct 08 10:04:30 compute-0 systemd[1]: libpod-10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2.scope: Consumed 3.418s CPU time.
Oct 08 10:04:30 compute-0 podman[262140]: 2025-10-08 10:04:30.669372594 +0000 UTC m=+0.626671672 container died 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 08 10:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2-userdata-shm.mount: Deactivated successfully.
Oct 08 10:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e-merged.mount: Deactivated successfully.
Oct 08 10:04:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:04:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9009 writes, 35K keys, 9009 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9009 writes, 1887 syncs, 4.77 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 764 writes, 1222 keys, 764 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 764 writes, 362 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 08 10:04:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:04:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:32.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:32 compute-0 podman[262169]: 2025-10-08 10:04:32.137729851 +0000 UTC m=+0.046397195 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 08 10:04:32 compute-0 podman[262140]: 2025-10-08 10:04:32.190680456 +0000 UTC m=+2.147979534 container cleanup 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:04:32 compute-0 podman[262140]: nova_compute
Oct 08 10:04:32 compute-0 podman[262191]: nova_compute
Oct 08 10:04:32 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 08 10:04:32 compute-0 systemd[1]: Stopped nova_compute container.
Oct 08 10:04:32 compute-0 systemd[1]: Starting nova_compute container...
Oct 08 10:04:32 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:32 compute-0 podman[262204]: 2025-10-08 10:04:32.409258176 +0000 UTC m=+0.128010818 container init 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, io.buildah.version=1.41.3, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 08 10:04:32 compute-0 podman[262204]: 2025-10-08 10:04:32.414584849 +0000 UTC m=+0.133337461 container start 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm)
Oct 08 10:04:32 compute-0 nova_compute[262220]: + sudo -E kolla_set_configs
Oct 08 10:04:32 compute-0 podman[262204]: nova_compute
Oct 08 10:04:32 compute-0 systemd[1]: Started nova_compute container.
Oct 08 10:04:32 compute-0 sudo[262081]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Validating config file
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying service configuration files
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Deleting /etc/ceph
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Creating directory /etc/ceph
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Writing out command to execute
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 08 10:04:32 compute-0 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 08 10:04:32 compute-0 nova_compute[262220]: ++ cat /run_command
Oct 08 10:04:32 compute-0 nova_compute[262220]: + CMD=nova-compute
Oct 08 10:04:32 compute-0 nova_compute[262220]: + ARGS=
Oct 08 10:04:32 compute-0 nova_compute[262220]: + sudo kolla_copy_cacerts
Oct 08 10:04:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:32 compute-0 nova_compute[262220]: + [[ ! -n '' ]]
Oct 08 10:04:32 compute-0 nova_compute[262220]: + . kolla_extend_start
Oct 08 10:04:32 compute-0 nova_compute[262220]: Running command: 'nova-compute'
Oct 08 10:04:32 compute-0 nova_compute[262220]: + echo 'Running command: '\''nova-compute'\'''
Oct 08 10:04:32 compute-0 nova_compute[262220]: + umask 0022
Oct 08 10:04:32 compute-0 nova_compute[262220]: + exec nova-compute
Oct 08 10:04:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:04:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:04:33 compute-0 ceph-mon[73572]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:04:33 compute-0 podman[262258]: 2025-10-08 10:04:33.931829289 +0000 UTC m=+0.088874490 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 08 10:04:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:04:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:34.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:04:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:34 compute-0 sudo[262406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojlaljbvfzggeitcmgasassxltsgdkxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759917874.3465805-5260-86240696539473/AnsiballZ_podman_container.py'
Oct 08 10:04:34 compute-0 sudo[262406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:04:34 compute-0 nova_compute[262220]: 2025-10-08 10:04:34.708 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 08 10:04:34 compute-0 nova_compute[262220]: 2025-10-08 10:04:34.709 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 08 10:04:34 compute-0 nova_compute[262220]: 2025-10-08 10:04:34.709 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 08 10:04:34 compute-0 nova_compute[262220]: 2025-10-08 10:04:34.709 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 08 10:04:34 compute-0 nova_compute[262220]: 2025-10-08 10:04:34.848 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:04:34 compute-0 nova_compute[262220]: 2025-10-08 10:04:34.877 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:04:34 compute-0 python3.9[262408]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 08 10:04:35 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:04:35 compute-0 ceph-mon[73572]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:35 compute-0 systemd[1]: Started libpod-conmon-17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8.scope.
Oct 08 10:04:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d531c804c7a8b89836f21851bdd3a7c846cca0fd115fa1914c09fa20f892011c/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d531c804c7a8b89836f21851bdd3a7c846cca0fd115fa1914c09fa20f892011c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d531c804c7a8b89836f21851bdd3a7c846cca0fd115fa1914c09fa20f892011c/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.310 2 INFO nova.virt.driver [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 08 10:04:35 compute-0 podman[262436]: 2025-10-08 10:04:35.356387916 +0000 UTC m=+0.363153845 container init 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251001)
Oct 08 10:04:35 compute-0 podman[262436]: 2025-10-08 10:04:35.365105269 +0000 UTC m=+0.371871198 container start 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible)
Oct 08 10:04:35 compute-0 python3.9[262408]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Applying nova statedir ownership
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 08 10:04:35 compute-0 nova_compute_init[262458]: INFO:nova_statedir:Nova statedir ownership complete
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.415 2 INFO nova.compute.provider_config [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 08 10:04:35 compute-0 systemd[1]: libpod-17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8.scope: Deactivated successfully.
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.428 2 DEBUG oslo_concurrency.lockutils [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_concurrency.lockutils [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_concurrency.lockutils [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 podman[262459]: 2025-10-08 10:04:35.434505167 +0000 UTC m=+0.026107297 container died 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, tcib_managed=true)
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.503 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.503 2 WARNING oslo_config.cfg [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 08 10:04:35 compute-0 nova_compute[262220]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 08 10:04:35 compute-0 nova_compute[262220]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 08 10:04:35 compute-0 nova_compute[262220]: and ``live_migration_inbound_addr`` respectively.
Oct 08 10:04:35 compute-0 nova_compute[262220]: ).  Its value may be silently ignored in the future.
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.503 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
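[editorial note] The deprecation warning above says live_migration_uri should be replaced by live_migration_scheme and live_migration_inbound_addr. A minimal nova.conf sketch of the equivalent settings, assuming the same qemu+tls transport as the value logged above and a hypothetical migration-network address (this host currently logs both replacement options as None), would be:

    [libvirt]
    # replaces: live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # hypothetical address on the migration network; substitute this host's real value
    live_migration_inbound_addr = 172.17.2.10

This is only an illustrative sketch, not the configuration of the running service, which still carries the deprecated option.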
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.503 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.503 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_secret_uuid        = 787292cc-8154-50c4-9e00-e9be3e817149 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8-userdata-shm.mount: Deactivated successfully.
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d531c804c7a8b89836f21851bdd3a7c846cca0fd115fa1914c09fa20f892011c-merged.mount: Deactivated successfully.
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.565 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.579 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.580 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.580 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.580 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 08 10:04:35 compute-0 podman[262469]: 2025-10-08 10:04:35.584860907 +0000 UTC m=+0.151931442 container cleanup 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.591 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fc2df2f24f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 08 10:04:35 compute-0 systemd[1]: libpod-conmon-17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8.scope: Deactivated successfully.
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.594 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fc2df2f24f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.595 2 INFO nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Connection event '1' reason 'None'
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.601 2 INFO nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host capabilities <capabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]: 
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <host>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <uuid>a1287f1c-5981-4c2e-a0ce-6a9c84016045</uuid>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <arch>x86_64</arch>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model>EPYC-Rome-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <vendor>AMD</vendor>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <microcode version='16777317'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <signature family='23' model='49' stepping='0'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='x2apic'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='tsc-deadline'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='osxsave'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='hypervisor'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='tsc_adjust'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='spec-ctrl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='stibp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='arch-capabilities'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='cmp_legacy'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='topoext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='virt-ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='lbrv'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='tsc-scale'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='vmcb-clean'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='pause-filter'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='pfthreshold'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='svme-addr-chk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='rdctl-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='skip-l1dfl-vmentry'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='mds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature name='pschange-mc-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <pages unit='KiB' size='4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <pages unit='KiB' size='2048'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <pages unit='KiB' size='1048576'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <power_management>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <suspend_mem/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </power_management>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <iommu support='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <migration_features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <live/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <uri_transports>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <uri_transport>tcp</uri_transport>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <uri_transport>rdma</uri_transport>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </uri_transports>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </migration_features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <topology>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <cells num='1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <cell id='0'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:           <memory unit='KiB'>7864104</memory>
Oct 08 10:04:35 compute-0 nova_compute[262220]:           <pages unit='KiB' size='4'>1966026</pages>
Oct 08 10:04:35 compute-0 nova_compute[262220]:           <pages unit='KiB' size='2048'>0</pages>
Oct 08 10:04:35 compute-0 nova_compute[262220]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 08 10:04:35 compute-0 nova_compute[262220]:           <distances>
Oct 08 10:04:35 compute-0 nova_compute[262220]:             <sibling id='0' value='10'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:           </distances>
Oct 08 10:04:35 compute-0 nova_compute[262220]:           <cpus num='8'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:           </cpus>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         </cell>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </cells>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </topology>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <cache>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </cache>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <secmodel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model>selinux</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <doi>0</doi>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </secmodel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <secmodel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model>dac</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <doi>0</doi>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </secmodel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </host>
Oct 08 10:04:35 compute-0 nova_compute[262220]: 
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <guest>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <os_type>hvm</os_type>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <arch name='i686'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <wordsize>32</wordsize>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <domain type='qemu'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <domain type='kvm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </arch>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <pae/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <nonpae/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <acpi default='on' toggle='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <apic default='on' toggle='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <cpuselection/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <deviceboot/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <disksnapshot default='on' toggle='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <externalSnapshot/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </guest>
Oct 08 10:04:35 compute-0 nova_compute[262220]: 
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <guest>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <os_type>hvm</os_type>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <arch name='x86_64'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <wordsize>64</wordsize>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <domain type='qemu'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <domain type='kvm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </arch>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <acpi default='on' toggle='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <apic default='on' toggle='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <cpuselection/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <deviceboot/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <disksnapshot default='on' toggle='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <externalSnapshot/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </guest>
Oct 08 10:04:35 compute-0 nova_compute[262220]: 
Oct 08 10:04:35 compute-0 nova_compute[262220]: </capabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]: 
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.607 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.609 2 WARNING nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.609 2 DEBUG nova.virt.libvirt.volume.mount [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.637 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 08 10:04:35 compute-0 nova_compute[262220]: <domainCapabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <path>/usr/libexec/qemu-kvm</path>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <domain>kvm</domain>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <arch>i686</arch>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <vcpu max='4096'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <iothreads supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <os supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <enum name='firmware'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <loader supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>rom</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pflash</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='readonly'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>yes</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>no</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='secure'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>no</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </loader>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </os>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='host-passthrough' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='hostPassthroughMigratable'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>on</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>off</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='maximum' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='maximumMigratable'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>on</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>off</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='host-model' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <vendor>AMD</vendor>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='x2apic'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc-deadline'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='hypervisor'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc_adjust'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='spec-ctrl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='stibp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='arch-capabilities'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='cmp_legacy'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='overflow-recov'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='succor'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='amd-ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='virt-ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='lbrv'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc-scale'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='vmcb-clean'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='flushbyasid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pause-filter'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pfthreshold'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='svme-addr-chk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='rdctl-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='mds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pschange-mc-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='gds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='rfds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='disable' name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='custom' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Dhyana-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Genoa'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='auto-ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Genoa-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='auto-ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 08 10:04:35 compute-0 sudo[262406]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-128'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-256'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-512'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v6'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v7'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='KnightsMill'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4fmaps'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4vnniw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512er'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512pf'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='KnightsMill-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4fmaps'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4vnniw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512er'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512pf'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G4-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tbm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G5-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tbm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SierraForest'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ne-convert'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cmpccxadd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SierraForest-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ne-convert'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cmpccxadd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='athlon'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='athlon-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='core2duo'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='core2duo-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='coreduo'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='coreduo-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='n270'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='n270-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='phenom'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='phenom-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <memoryBacking supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <enum name='sourceType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>file</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>anonymous</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>memfd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </memoryBacking>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <disk supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='diskDevice'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>disk</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>cdrom</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>floppy</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>lun</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='bus'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>fdc</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>scsi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>sata</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-non-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <graphics supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vnc</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>egl-headless</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>dbus</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </graphics>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <video supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='modelType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vga</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>cirrus</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>none</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>bochs</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ramfb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </video>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <hostdev supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='mode'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>subsystem</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='startupPolicy'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>default</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>mandatory</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>requisite</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>optional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='subsysType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pci</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>scsi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='capsType'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='pciBackend'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </hostdev>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <rng supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-non-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>random</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>egd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>builtin</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <filesystem supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='driverType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>path</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>handle</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtiofs</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </filesystem>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <tpm supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tpm-tis</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tpm-crb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>emulator</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>external</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendVersion'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>2.0</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </tpm>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <redirdev supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='bus'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </redirdev>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <channel supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pty</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>unix</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </channel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <crypto supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>qemu</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>builtin</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </crypto>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <interface supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>default</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>passt</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <panic supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>isa</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>hyperv</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </panic>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <gic supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <vmcoreinfo supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <genid supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <backingStoreInput supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <backup supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <async-teardown supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <ps2 supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <sev supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <sgx supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <hyperv supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='features'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>relaxed</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vapic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>spinlocks</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vpindex</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>runtime</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>synic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>stimer</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>reset</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vendor_id</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>frequencies</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>reenlightenment</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tlbflush</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ipi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>avic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>emsr_bitmap</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>xmm_input</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </hyperv>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <launchSecurity supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </features>
Oct 08 10:04:35 compute-0 nova_compute[262220]: </domainCapabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.642 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 08 10:04:35 compute-0 nova_compute[262220]: <domainCapabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <path>/usr/libexec/qemu-kvm</path>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <domain>kvm</domain>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <arch>i686</arch>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <vcpu max='240'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <iothreads supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <os supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <enum name='firmware'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <loader supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>rom</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pflash</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='readonly'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>yes</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>no</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='secure'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>no</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </loader>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </os>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='host-passthrough' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='hostPassthroughMigratable'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>on</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>off</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='maximum' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='maximumMigratable'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>on</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>off</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='host-model' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <vendor>AMD</vendor>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='x2apic'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc-deadline'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='hypervisor'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc_adjust'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='spec-ctrl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='stibp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='arch-capabilities'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='cmp_legacy'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='overflow-recov'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='succor'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='amd-ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='virt-ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='lbrv'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc-scale'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='vmcb-clean'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='flushbyasid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pause-filter'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pfthreshold'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='svme-addr-chk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='rdctl-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='mds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pschange-mc-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='gds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='rfds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='disable' name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='custom' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Dhyana-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Genoa'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='auto-ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Genoa-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='auto-ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-128'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-256'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-512'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v6'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v7'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='KnightsMill'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4fmaps'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4vnniw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512er'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512pf'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='KnightsMill-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4fmaps'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4vnniw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512er'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512pf'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G4-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tbm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:35] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G5-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tbm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:35] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SierraForest'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ne-convert'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cmpccxadd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SierraForest-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ne-convert'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cmpccxadd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='athlon'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='athlon-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='core2duo'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='core2duo-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='coreduo'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='coreduo-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='n270'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='n270-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='phenom'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='phenom-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <memoryBacking supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <enum name='sourceType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>file</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>anonymous</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>memfd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </memoryBacking>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <disk supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='diskDevice'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>disk</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>cdrom</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>floppy</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>lun</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='bus'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ide</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>fdc</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>scsi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>sata</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-non-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <graphics supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vnc</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>egl-headless</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>dbus</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </graphics>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <video supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='modelType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vga</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>cirrus</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>none</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>bochs</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ramfb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </video>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <hostdev supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='mode'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>subsystem</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='startupPolicy'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>default</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>mandatory</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>requisite</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>optional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='subsysType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pci</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>scsi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='capsType'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='pciBackend'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </hostdev>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <rng supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-non-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>random</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>egd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>builtin</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <filesystem supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='driverType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>path</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>handle</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtiofs</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </filesystem>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <tpm supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tpm-tis</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tpm-crb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>emulator</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>external</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendVersion'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>2.0</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </tpm>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <redirdev supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='bus'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </redirdev>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <channel supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pty</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>unix</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </channel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <crypto supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>qemu</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>builtin</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </crypto>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <interface supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>default</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>passt</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <panic supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>isa</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>hyperv</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </panic>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <gic supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <vmcoreinfo supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <genid supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <backingStoreInput supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <backup supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <async-teardown supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <ps2 supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <sev supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <sgx supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <hyperv supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='features'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>relaxed</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vapic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>spinlocks</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vpindex</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>runtime</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>synic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>stimer</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>reset</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vendor_id</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>frequencies</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>reenlightenment</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tlbflush</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ipi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>avic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>emsr_bitmap</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>xmm_input</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </hyperv>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <launchSecurity supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </features>
Oct 08 10:04:35 compute-0 nova_compute[262220]: </domainCapabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.670 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.674 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 08 10:04:35 compute-0 nova_compute[262220]: <domainCapabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <path>/usr/libexec/qemu-kvm</path>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <domain>kvm</domain>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <arch>x86_64</arch>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <vcpu max='4096'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <iothreads supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <os supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <enum name='firmware'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>efi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <loader supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>rom</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pflash</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='readonly'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>yes</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>no</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='secure'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>yes</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>no</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </loader>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </os>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='host-passthrough' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='hostPassthroughMigratable'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>on</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>off</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='maximum' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='maximumMigratable'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>on</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>off</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='host-model' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <vendor>AMD</vendor>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='x2apic'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc-deadline'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='hypervisor'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc_adjust'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='spec-ctrl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='stibp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='arch-capabilities'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='cmp_legacy'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='overflow-recov'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='succor'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='amd-ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='virt-ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='lbrv'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc-scale'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='vmcb-clean'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='flushbyasid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pause-filter'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pfthreshold'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='svme-addr-chk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='rdctl-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='mds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pschange-mc-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='gds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='rfds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='disable' name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='custom' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Dhyana-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Genoa'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='auto-ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Genoa-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='auto-ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-128'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-256'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-512'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v6'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v7'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='KnightsMill'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4fmaps'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4vnniw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512er'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512pf'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='KnightsMill-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4fmaps'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4vnniw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512er'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512pf'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G4-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tbm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G5-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tbm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SierraForest'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ne-convert'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cmpccxadd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SierraForest-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ne-convert'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cmpccxadd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='athlon'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='athlon-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='core2duo'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='core2duo-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='coreduo'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='coreduo-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='n270'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='n270-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='phenom'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='phenom-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <memoryBacking supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <enum name='sourceType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>file</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>anonymous</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>memfd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </memoryBacking>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <disk supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='diskDevice'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>disk</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>cdrom</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>floppy</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>lun</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='bus'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>fdc</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>scsi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>sata</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-non-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <graphics supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vnc</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>egl-headless</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>dbus</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </graphics>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <video supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='modelType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vga</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>cirrus</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>none</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>bochs</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ramfb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </video>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <hostdev supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='mode'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>subsystem</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='startupPolicy'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>default</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>mandatory</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>requisite</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>optional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='subsysType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pci</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>scsi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='capsType'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='pciBackend'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </hostdev>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <rng supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-non-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>random</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>egd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>builtin</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <filesystem supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='driverType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>path</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>handle</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtiofs</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </filesystem>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <tpm supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tpm-tis</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tpm-crb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>emulator</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>external</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendVersion'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>2.0</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </tpm>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <redirdev supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='bus'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </redirdev>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <channel supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pty</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>unix</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </channel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <crypto supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>qemu</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>builtin</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </crypto>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <interface supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>default</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>passt</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <panic supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>isa</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>hyperv</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </panic>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <gic supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <vmcoreinfo supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <genid supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <backingStoreInput supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <backup supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <async-teardown supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <ps2 supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <sev supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <sgx supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <hyperv supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='features'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>relaxed</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vapic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>spinlocks</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vpindex</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>runtime</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>synic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>stimer</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>reset</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vendor_id</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>frequencies</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>reenlightenment</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tlbflush</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ipi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>avic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>emsr_bitmap</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>xmm_input</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </hyperv>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <launchSecurity supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </features>
Oct 08 10:04:35 compute-0 nova_compute[262220]: </domainCapabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.756 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 08 10:04:35 compute-0 nova_compute[262220]: <domainCapabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <path>/usr/libexec/qemu-kvm</path>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <domain>kvm</domain>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <arch>x86_64</arch>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <vcpu max='240'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <iothreads supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <os supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <enum name='firmware'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <loader supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>rom</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pflash</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='readonly'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>yes</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>no</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='secure'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>no</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </loader>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </os>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='host-passthrough' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='hostPassthroughMigratable'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>on</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>off</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='maximum' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='maximumMigratable'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>on</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>off</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='host-model' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <vendor>AMD</vendor>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='x2apic'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc-deadline'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='hypervisor'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc_adjust'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='spec-ctrl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='stibp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='arch-capabilities'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='cmp_legacy'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='overflow-recov'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='succor'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='amd-ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='virt-ssbd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='lbrv'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='tsc-scale'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='vmcb-clean'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='flushbyasid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pause-filter'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pfthreshold'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='svme-addr-chk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='rdctl-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='mds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='pschange-mc-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='gds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='require' name='rfds-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <feature policy='disable' name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <mode name='custom' supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Broadwell-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cascadelake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Cooperlake-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Denverton-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Dhyana-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Genoa'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='auto-ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Genoa-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='auto-ibrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Milan-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amd-psfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='no-nested-data-bp'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='null-sel-clr-base'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='stibp-always-on'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-Rome-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='EPYC-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='GraniteRapids-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-128'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-256'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx10-512'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='prefetchiti'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Haswell-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-noTSX'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v6'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Icelake-Server-v7'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='IvyBridge-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='KnightsMill'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4fmaps'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4vnniw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512er'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512pf'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='KnightsMill-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4fmaps'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-4vnniw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512er'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512pf'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G4-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tbm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Opteron_G5-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fma4'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tbm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xop'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SapphireRapids-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='amx-tile'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-bf16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-fp16'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512-vpopcntdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bitalg'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vbmi2'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrc'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fzrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='la57'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='taa-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='tsx-ldtrk'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xfd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SierraForest'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ne-convert'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cmpccxadd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='SierraForest-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ifma'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-ne-convert'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx-vnni-int8'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='bus-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cmpccxadd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fbsdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='fsrs'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ibrs-all'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mcdt-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pbrsb-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='psdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='sbdr-ssdp-no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='serialize'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vaes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='vpclmulqdq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Client-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='hle'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='rtm'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Skylake-Server-v5'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512bw'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512cd'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512dq'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512f'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='avx512vl'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='invpcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pcid'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='pku'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='mpx'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v2'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v3'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='core-capability'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='split-lock-detect'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='Snowridge-v4'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='cldemote'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='erms'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='gfni'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdir64b'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='movdiri'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='xsaves'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='athlon'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='athlon-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='core2duo'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='core2duo-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='coreduo'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='coreduo-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='n270'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='n270-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='ss'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='phenom'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <blockers model='phenom-v1'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnow'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <feature name='3dnowext'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </blockers>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </mode>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <memoryBacking supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <enum name='sourceType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>file</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>anonymous</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <value>memfd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </memoryBacking>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <disk supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='diskDevice'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>disk</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>cdrom</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>floppy</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>lun</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='bus'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ide</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>fdc</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>scsi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>sata</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-non-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <graphics supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vnc</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>egl-headless</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>dbus</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </graphics>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <video supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='modelType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vga</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>cirrus</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>none</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>bochs</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ramfb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </video>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <hostdev supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='mode'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>subsystem</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='startupPolicy'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>default</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>mandatory</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>requisite</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>optional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='subsysType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pci</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>scsi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='capsType'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='pciBackend'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </hostdev>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <rng supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtio-non-transitional</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>random</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>egd</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>builtin</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <filesystem supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='driverType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>path</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>handle</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>virtiofs</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </filesystem>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <tpm supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tpm-tis</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tpm-crb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>emulator</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>external</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendVersion'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>2.0</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </tpm>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <redirdev supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='bus'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>usb</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </redirdev>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <channel supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>pty</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>unix</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </channel>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <crypto supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='type'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>qemu</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendModel'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>builtin</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </crypto>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <interface supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='backendType'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>default</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>passt</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <panic supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='model'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>isa</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>hyperv</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </panic>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   <features>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <gic supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <vmcoreinfo supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <genid supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <backingStoreInput supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <backup supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <async-teardown supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <ps2 supported='yes'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <sev supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <sgx supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <hyperv supported='yes'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       <enum name='features'>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>relaxed</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vapic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>spinlocks</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vpindex</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>runtime</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>synic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>stimer</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>reset</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>vendor_id</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>frequencies</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>reenlightenment</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>tlbflush</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>ipi</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>avic</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>emsr_bitmap</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:         <value>xmm_input</value>
Oct 08 10:04:35 compute-0 nova_compute[262220]:       </enum>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     </hyperv>
Oct 08 10:04:35 compute-0 nova_compute[262220]:     <launchSecurity supported='no'/>
Oct 08 10:04:35 compute-0 nova_compute[262220]:   </features>
Oct 08 10:04:35 compute-0 nova_compute[262220]: </domainCapabilities>
Oct 08 10:04:35 compute-0 nova_compute[262220]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
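The dump above is libvirt's <domainCapabilities> document as nova_compute retrieves it at startup: every CPU model the hypervisor knows is tagged usable='yes' or usable='no', and each unusable model carries a <blockers> list naming the features this host cannot provide. A minimal sketch of extracting that mapping with the standard library; summarize_cpu_models is a hypothetical helper, and the XML would normally come from `virsh domcapabilities` or libvirt's getDomainCapabilities():

    import xml.etree.ElementTree as ET

    def summarize_cpu_models(caps_xml: str) -> dict:
        """Map CPU model name -> list of blocking features ([] if usable)."""
        root = ET.fromstring(caps_xml)
        # Each <blockers model='X'> holds the <feature/> list that makes X unusable.
        blockers = {
            b.get('model'): [f.get('name') for f in b.findall('feature')]
            for b in root.iter('blockers')
        }
        summary = {}
        for model in root.iter('model'):
            usable = model.get('usable')
            if usable is None:
                continue  # skip elements without a usability verdict
            name = (model.text or '').strip()
            summary[name] = blockers.get(name, []) if usable == 'no' else []
        return summary

On this host, summary['Westmere'] would be [] while summary['Skylake-Client'] would list erms, hle, invpcid, pcid and rtm, matching the dump.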
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.820 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.820 2 INFO nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Secure Boot support detected
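The secure boot check inspects the same capabilities document: the <os>/<loader> section (emitted earlier in the dump, before the excerpt above) advertises whether UEFI firmware auto-selection and a secure-boot-capable loader are available. A hedged sketch of that kind of check, assuming libvirt's documented <os> layout; the exact nova logic lives in the host.py path shown in the log:

    import xml.etree.ElementTree as ET

    def supports_secure_boot(caps_xml: str) -> bool:
        root = ET.fromstring(caps_xml)
        os_elem = root.find('os')
        if os_elem is None:
            return False
        firmwares = [v.text for v in os_elem.findall("enum[@name='firmware']/value")]
        secure = [v.text for v in os_elem.findall("loader/enum[@name='secure']/value")]
        # "Secure Boot support detected" corresponds to both being advertised.
        return 'efi' in firmwares and 'yes' in secure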
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.822 2 INFO nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
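The INFO line records a config decision: post-copy and auto-converge are alternative ways of forcing a stalling live migration to complete, and when live_migration_permit_post_copy is True and the hypervisor supports post-copy, auto-converge is left off. A minimal sketch of that decision; the function and parameter names are illustrative, not nova's:

    def pick_migration_strategy(permit_post_copy: bool,
                                post_copy_available: bool,
                                permit_auto_converge: bool) -> str:
        # Post-copy wins when permitted and available, as the log states.
        if permit_post_copy and post_copy_available:
            return 'post-copy'
        if permit_auto_converge:
            return 'auto-converge'
        return 'none'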
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.832 2 DEBUG nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
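"Enabling emulated TPM support" means _check_vtpm_support found its prerequisites. A hypothetical approximation of such a probe, combining a check for the swtpm emulator binary with the <tpm> block advertised in the capabilities dump above (tpm-tis/tpm-crb models, 'emulator' backend); the real check in driver.py is stricter:

    import shutil
    import xml.etree.ElementTree as ET

    def vtpm_available(caps_xml: str) -> bool:
        # Emulated TPM needs the swtpm binary on the host...
        if shutil.which('swtpm') is None:
            return False
        # ...and libvirt advertising a TPM device with an emulator backend.
        tpm = ET.fromstring(caps_xml).find('devices/tpm')
        if tpm is None or tpm.get('supported') != 'yes':
            return False
        backends = [v.text for v in tpm.findall("enum[@name='backendModel']/value")]
        return 'emulator' in backends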
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.870 2 INFO nova.virt.node [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Determined node identity 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from /var/lib/nova/compute_id
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.886 2 WARNING nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Compute nodes ['62e4b021-d3ae-43f9-883d-805e2c7d21a2'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.924 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 08 10:04:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.989 2 WARNING nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.990 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.990 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.990 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
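The three oslo_concurrency DEBUG lines are the acquire/wait/hold trace that lockutils emits around a synchronized section, here serializing access to the resource tracker's state under the 'compute_resources' lock. A minimal sketch of the pattern using the real oslo.concurrency decorator; the function body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs with the in-process 'compute_resources' lock held; the
        # "waited 0.000s" / "held 0.000s" figures in the log are measured
        # by this wrapper.
        pass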
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.990 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:04:35 compute-0 nova_compute[262220]: 2025-10-08 10:04:35.991 2 DEBUG oslo_concurrency.processutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:04:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:36.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
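The radosgw "beast" lines are anonymous HEAD / probes answered with 200, the typical pattern of a load balancer health-checking the object gateway from the controller addresses. A small stand-alone probe in the same spirit; the port is an assumption, since the log does not show which one the gateway binds:

    import http.client

    def rgw_healthy(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request('HEAD', '/')
            return conn.getresponse().status == 200
        except (OSError, http.client.HTTPException):
            return False
        finally:
            conn.close()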
Oct 08 10:04:36 compute-0 sshd-session[223280]: Connection closed by 192.168.122.30 port 34140
Oct 08 10:04:36 compute-0 sshd-session[223258]: pam_unix(sshd:session): session closed for user zuul
Oct 08 10:04:36 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 08 10:04:36 compute-0 systemd[1]: session-55.scope: Consumed 2min 40.010s CPU time.
Oct 08 10:04:36 compute-0 systemd-logind[798]: Session 55 logged out. Waiting for processes to exit.
Oct 08 10:04:36 compute-0 systemd-logind[798]: Removed session 55.
Oct 08 10:04:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:04:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325292707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:04:36 compute-0 nova_compute[262220]: 2025-10-08 10:04:36.475 2 DEBUG oslo_concurrency.processutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
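The paired processutils lines show the resource audit shelling out to `ceph df` to size the RBD-backed disk pool (0.484s for the round trip here). A sketch of issuing the same probe and reading the cluster totals; the 'stats' keys follow ceph's JSON output format:

    import json
    import subprocess

    def ceph_capacity(conf='/etc/ceph/ceph.conf', client='openstack'):
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', client, '--conf', conf])
        stats = json.loads(out)['stats']
        # Total vs. available bytes across the cluster, as used for DISK_GB.
        return stats['total_bytes'], stats['total_avail_bytes']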
Oct 08 10:04:36 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 08 10:04:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:36 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 08 10:04:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:04:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:04:36 compute-0 nova_compute[262220]: 2025-10-08 10:04:36.801 2 WARNING nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:04:36 compute-0 nova_compute[262220]: 2025-10-08 10:04:36.802 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4925MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
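The resource view embeds the host PCI inventory as a JSON list (vendor 8086 covers the Intel chipset functions, 1af4 the virtio devices of this KVM guest). A tiny sketch of slicing that list the way a debugging session might, assuming the JSON fragment has been captured from the log line above:

    import json
    from collections import Counter

    def pci_summary(pci_devices_json: str):
        devices = json.loads(pci_devices_json)
        # Tally devices per vendor and collect their PCI addresses.
        by_vendor = Counter(dev['vendor_id'] for dev in devices)
        addresses = [dev['address'] for dev in devices]
        return by_vendor, addresses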
Oct 08 10:04:36 compute-0 nova_compute[262220]: 2025-10-08 10:04:36.802 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:04:36 compute-0 nova_compute[262220]: 2025-10-08 10:04:36.803 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:04:36 compute-0 nova_compute[262220]: 2025-10-08 10:04:36.822 2 WARNING nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] No compute node record for compute-0.ctlplane.example.com:62e4b021-d3ae-43f9-883d-805e2c7d21a2: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 62e4b021-d3ae-43f9-883d-805e2c7d21a2 could not be found.
Oct 08 10:04:36 compute-0 nova_compute[262220]: 2025-10-08 10:04:36.856 2 INFO nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 62e4b021-d3ae-43f9-883d-805e2c7d21a2
Oct 08 10:04:36 compute-0 nova_compute[262220]: 2025-10-08 10:04:36.920 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:04:36 compute-0 nova_compute[262220]: 2025-10-08 10:04:36.920 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:04:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:37.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:04:37 compute-0 ceph-mon[73572]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1325292707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:04:37 compute-0 nova_compute[262220]: 2025-10-08 10:04:37.754 2 INFO nova.scheduler.client.report [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [req-e54a4d4b-04f5-4d0b-9635-3bda654eb34d] Created resource provider record via placement API for resource provider with UUID 62e4b021-d3ae-43f9-883d-805e2c7d21a2 and name compute-0.ctlplane.example.com.
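That report-client line corresponds to a POST against the placement API creating the provider record. A hedged sketch of the call with requests; placement_url and token stand in for the keystone-discovered endpoint and a valid auth token:

    import requests

    def create_resource_provider(placement_url, token, uuid, name):
        resp = requests.post(
            f'{placement_url}/resource_providers',
            headers={'X-Auth-Token': token,
                     'OpenStack-API-Version': 'placement 1.20'},
            json={'uuid': uuid, 'name': name})
        resp.raise_for_status()  # 200/201 depending on microversion
        return resp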
Oct 08 10:04:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:38.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.168 2 DEBUG oslo_concurrency.processutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:04:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3233319559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:04:38 compute-0 ceph-mon[73572]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2977056903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:04:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:04:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634133654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.633 2 DEBUG oslo_concurrency.processutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.638 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 08 10:04:38 compute-0 nova_compute[262220]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.638 2 INFO nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] kernel doesn't support AMD SEV
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.639 2 DEBUG nova.compute.provider_tree [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.639 2 DEBUG nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.686 2 DEBUG nova.scheduler.client.report [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updated inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.686 2 DEBUG nova.compute.provider_tree [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updating resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.686 2 DEBUG nova.compute.provider_tree [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.774 2 DEBUG nova.compute.provider_tree [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updating resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.799 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.799 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.799 2 DEBUG nova.service [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.916 2 DEBUG nova.service [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 08 10:04:38 compute-0 nova_compute[262220]: 2025-10-08 10:04:38.916 2 DEBUG nova.servicegroup.drivers.db [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 08 10:04:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/634133654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:04:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2053121441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:04:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3957957424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:04:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:40.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:40 compute-0 ceph-mon[73572]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:04:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:42.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:04:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 10:04:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:04:42 compute-0 podman[262630]: 2025-10-08 10:04:42.907785417 +0000 UTC m=+0.062524287 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 08 10:04:43 compute-0 ceph-mon[73572]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:04:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:43 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:04:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:44.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:44.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:45 compute-0 sudo[262656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:04:45 compute-0 sudo[262656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:04:45 compute-0 sudo[262656]: pam_unix(sudo:session): session closed for user root
Oct 08 10:04:45 compute-0 ceph-mon[73572]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:04:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:45] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct 08 10:04:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:45] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct 08 10:04:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100445 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:04:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:45 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:46.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:04:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:04:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:47.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:04:47 compute-0 ceph-mon[73572]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:04:47
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.log', '.mgr', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:04:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:47 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:04:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:47 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:04:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:48.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:04:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:04:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:04:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:49 compute-0 ceph-mon[73572]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:49 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:04:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:50.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:04:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:50.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:51 compute-0 ceph-mon[73572]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 08 10:04:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:51 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:51 compute-0 nova_compute[262220]: 2025-10-08 10:04:51.918 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:04:51 compute-0 nova_compute[262220]: 2025-10-08 10:04:51.944 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:04:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:52.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:52.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:53 compute-0 ceph-mon[73572]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:53 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c0091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:54.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:55 compute-0 ceph-mon[73572]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:04:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:55] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:04:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:55] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:04:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:55 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:55 compute-0 podman[262692]: 2025-10-08 10:04:55.929559087 +0000 UTC m=+0.090329637 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 08 10:04:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:04:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c0091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:56.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:04:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:04:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:57.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:04:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:57.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:04:57 compute-0 ceph-mon[73572]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:04:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:04:57.402 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:04:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:04:57.403 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:04:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:04:57.403 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:04:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:57 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:04:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:58.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:04:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:04:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:58.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:04:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:04:59 compute-0 ceph-mon[73572]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:04:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:04:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:05:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:05:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:01 compute-0 ceph-mon[73572]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:01 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:05:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:05:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:02.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:05:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:05:02 compute-0 podman[262726]: 2025-10-08 10:05:02.897815306 +0000 UTC m=+0.053916956 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:05:03 compute-0 ceph-mon[73572]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:05:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:03 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:05:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:05:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:04.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:05:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:05:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:04.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:05:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:04 compute-0 podman[262749]: 2025-10-08 10:05:04.893087161 +0000 UTC m=+0.054590580 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 08 10:05:05 compute-0 sudo[262772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:05:05 compute-0 sudo[262772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:05 compute-0 sudo[262772]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:05 compute-0 ceph-mon[73572]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:05:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 08 10:05:05 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/398418135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:05:05 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 08 10:05:05 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/398418135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:05:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:05] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:05:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:05] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:05:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:05 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:06.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 08 10:05:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1345408182' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:05:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 08 10:05:06 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1345408182' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:05:06 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3579628909' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:05:06 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3579628909' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:05:06 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/398418135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:05:06 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/398418135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:05:06 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1345408182' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:05:06 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1345408182' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:05:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:06.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:07.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:05:07 compute-0 ceph-mon[73572]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:07 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:08 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:05:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:08.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:05:08 compute-0 ceph-mon[73572]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:08.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:08 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:09 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:10 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:05:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:10.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:05:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:10 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:11 compute-0 ceph-mon[73572]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:11 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:12 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:12 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:13 compute-0 ceph-mon[73572]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:13 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:13 compute-0 podman[262805]: 2025-10-08 10:05:13.904576843 +0000 UTC m=+0.057935060 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:05:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:05:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:14 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:14.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:14.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:14 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:15 compute-0 ceph-mon[73572]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:05:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:05:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:05:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:15 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:16 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:05:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:16.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:05:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:16.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:16 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:17.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:05:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:17.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:05:17 compute-0 ceph-mon[73572]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:05:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:05:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:17 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:05:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:05:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:18 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:18.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:05:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:05:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:05:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:05:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:05:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:18 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100518 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:05:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:19 compute-0 ceph-mon[73572]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:19 compute-0 sudo[262835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:05:19 compute-0 sudo[262835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:19 compute-0 sudo[262835]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:19 compute-0 sudo[262860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:05:19 compute-0 sudo[262860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 10:05:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:19 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 10:05:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:20 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:20.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:20 compute-0 sudo[262860]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:05:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:05:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:05:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:05:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:05:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:20.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:05:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:20 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:05:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:05:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:05:20 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:05:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:05:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:05:20 compute-0 sudo[262919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:05:20 compute-0 sudo[262919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:20 compute-0 sudo[262919]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:20 compute-0 sudo[262944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:05:20 compute-0 sudo[262944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:20 compute-0 ceph-mon[73572]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:05:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:05:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:05:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:05:20 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:05:21 compute-0 podman[263012]: 2025-10-08 10:05:21.257346943 +0000 UTC m=+0.045183810 container create 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:05:21 compute-0 systemd[1]: Started libpod-conmon-29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402.scope.
Oct 08 10:05:21 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:05:21 compute-0 podman[263012]: 2025-10-08 10:05:21.236545716 +0000 UTC m=+0.024382623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:05:21 compute-0 podman[263012]: 2025-10-08 10:05:21.375540175 +0000 UTC m=+0.163377072 container init 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:05:21 compute-0 podman[263012]: 2025-10-08 10:05:21.38379426 +0000 UTC m=+0.171631137 container start 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 10:05:21 compute-0 interesting_ramanujan[263028]: 167 167
Oct 08 10:05:21 compute-0 systemd[1]: libpod-29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402.scope: Deactivated successfully.
Oct 08 10:05:21 compute-0 conmon[263028]: conmon 29530d826fa666ff5873 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402.scope/container/memory.events
Oct 08 10:05:21 compute-0 podman[263012]: 2025-10-08 10:05:21.417877034 +0000 UTC m=+0.205713921 container attach 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:05:21 compute-0 podman[263012]: 2025-10-08 10:05:21.419483815 +0000 UTC m=+0.207320692 container died 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:05:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-027552be38688e8e7cb393c5f547fe7d9044305a9f0d9fb2820f6e65bc1f93b5-merged.mount: Deactivated successfully.
Oct 08 10:05:21 compute-0 podman[263012]: 2025-10-08 10:05:21.467564797 +0000 UTC m=+0.255401674 container remove 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 08 10:05:21 compute-0 systemd[1]: libpod-conmon-29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402.scope: Deactivated successfully.
Oct 08 10:05:21 compute-0 podman[263052]: 2025-10-08 10:05:21.641975313 +0000 UTC m=+0.040350176 container create 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 10:05:21 compute-0 systemd[1]: Started libpod-conmon-45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f.scope.
Oct 08 10:05:21 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:21 compute-0 podman[263052]: 2025-10-08 10:05:21.62505425 +0000 UTC m=+0.023429143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:05:21 compute-0 podman[263052]: 2025-10-08 10:05:21.722761095 +0000 UTC m=+0.121135988 container init 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 10:05:21 compute-0 podman[263052]: 2025-10-08 10:05:21.729233803 +0000 UTC m=+0.127608686 container start 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:05:21 compute-0 podman[263052]: 2025-10-08 10:05:21.733574942 +0000 UTC m=+0.131949815 container attach 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 08 10:05:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:21 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:22 compute-0 romantic_mayer[263069]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:05:22 compute-0 romantic_mayer[263069]: --> All data devices are unavailable
Oct 08 10:05:22 compute-0 systemd[1]: libpod-45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f.scope: Deactivated successfully.
Oct 08 10:05:22 compute-0 podman[263052]: 2025-10-08 10:05:22.056698318 +0000 UTC m=+0.455073201 container died 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 08 10:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d-merged.mount: Deactivated successfully.
Oct 08 10:05:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:22 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:22 compute-0 podman[263052]: 2025-10-08 10:05:22.10291132 +0000 UTC m=+0.501286184 container remove 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 08 10:05:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:22.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:22 compute-0 systemd[1]: libpod-conmon-45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f.scope: Deactivated successfully.
Oct 08 10:05:22 compute-0 sudo[262944]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:22 compute-0 sudo[263097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:05:22 compute-0 sudo[263097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:22 compute-0 sudo[263097]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:22 compute-0 sudo[263122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:05:22 compute-0 sudo[263122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:22 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:22 compute-0 podman[263187]: 2025-10-08 10:05:22.698579841 +0000 UTC m=+0.041380649 container create e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Oct 08 10:05:22 compute-0 systemd[1]: Started libpod-conmon-e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b.scope.
Oct 08 10:05:22 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:05:22 compute-0 podman[263187]: 2025-10-08 10:05:22.679825379 +0000 UTC m=+0.022626217 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:05:22 compute-0 podman[263187]: 2025-10-08 10:05:22.786484351 +0000 UTC m=+0.129285179 container init e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 10:05:22 compute-0 podman[263187]: 2025-10-08 10:05:22.794174167 +0000 UTC m=+0.136974975 container start e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 10:05:22 compute-0 practical_pare[263203]: 167 167
Oct 08 10:05:22 compute-0 systemd[1]: libpod-e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b.scope: Deactivated successfully.
Oct 08 10:05:22 compute-0 conmon[263203]: conmon e435fcbdb68719d60482 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b.scope/container/memory.events
Oct 08 10:05:22 compute-0 podman[263187]: 2025-10-08 10:05:22.828823499 +0000 UTC m=+0.171624307 container attach e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 10:05:22 compute-0 podman[263187]: 2025-10-08 10:05:22.829531232 +0000 UTC m=+0.172332040 container died e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 10:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3f0e4b9107df44b384e6d24ab0c8a7ee0ac3671e202a808b9cd1ee4fe674b52-merged.mount: Deactivated successfully.
Oct 08 10:05:23 compute-0 podman[263187]: 2025-10-08 10:05:23.051992639 +0000 UTC m=+0.394793447 container remove e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:05:23 compute-0 systemd[1]: libpod-conmon-e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b.scope: Deactivated successfully.
Oct 08 10:05:23 compute-0 ceph-mon[73572]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:23 compute-0 podman[263228]: 2025-10-08 10:05:23.289811139 +0000 UTC m=+0.083316654 container create 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 08 10:05:23 compute-0 podman[263228]: 2025-10-08 10:05:23.227923583 +0000 UTC m=+0.021429108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:05:23 compute-0 systemd[1]: Started libpod-conmon-6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0.scope.
Oct 08 10:05:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:23 compute-0 podman[263228]: 2025-10-08 10:05:23.369525807 +0000 UTC m=+0.163031342 container init 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 08 10:05:23 compute-0 podman[263228]: 2025-10-08 10:05:23.377584385 +0000 UTC m=+0.171089900 container start 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:05:23 compute-0 podman[263228]: 2025-10-08 10:05:23.381746359 +0000 UTC m=+0.175251904 container attach 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]: {
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:     "1": [
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:         {
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "devices": [
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "/dev/loop3"
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             ],
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "lv_name": "ceph_lv0",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "lv_size": "21470642176",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "name": "ceph_lv0",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "tags": {
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.cluster_name": "ceph",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.crush_device_class": "",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.encrypted": "0",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.osd_id": "1",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.type": "block",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.vdo": "0",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:                 "ceph.with_tpm": "0"
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             },
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "type": "block",
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:             "vg_name": "ceph_vg0"
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:         }
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]:     ]
Oct 08 10:05:23 compute-0 eloquent_babbage[263244]: }
Oct 08 10:05:23 compute-0 systemd[1]: libpod-6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0.scope: Deactivated successfully.
Oct 08 10:05:23 compute-0 podman[263228]: 2025-10-08 10:05:23.688548821 +0000 UTC m=+0.482054356 container died 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 08 10:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c-merged.mount: Deactivated successfully.
Oct 08 10:05:23 compute-0 podman[263228]: 2025-10-08 10:05:23.816601819 +0000 UTC m=+0.610107334 container remove 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:05:23 compute-0 systemd[1]: libpod-conmon-6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0.scope: Deactivated successfully.
Oct 08 10:05:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:23 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:23 compute-0 sudo[263122]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:23 compute-0 sudo[263268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:05:23 compute-0 sudo[263268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:23 compute-0 sudo[263268]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:23 compute-0 sudo[263294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:05:23 compute-0 sudo[263294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:05:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:24 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:24.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:24 compute-0 podman[263360]: 2025-10-08 10:05:24.38952668 +0000 UTC m=+0.038360612 container create 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:05:24 compute-0 systemd[1]: Started libpod-conmon-898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e.scope.
Oct 08 10:05:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:05:24 compute-0 podman[263360]: 2025-10-08 10:05:24.375640485 +0000 UTC m=+0.024474437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:05:24 compute-0 podman[263360]: 2025-10-08 10:05:24.474645651 +0000 UTC m=+0.123479593 container init 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 10:05:24 compute-0 podman[263360]: 2025-10-08 10:05:24.480419436 +0000 UTC m=+0.129253368 container start 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:05:24 compute-0 heuristic_mccarthy[263377]: 167 167
Oct 08 10:05:24 compute-0 systemd[1]: libpod-898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e.scope: Deactivated successfully.
Oct 08 10:05:24 compute-0 podman[263360]: 2025-10-08 10:05:24.485641453 +0000 UTC m=+0.134475405 container attach 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:05:24 compute-0 podman[263360]: 2025-10-08 10:05:24.48615103 +0000 UTC m=+0.134984952 container died 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:05:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e78c3a02a23b2d14d420dda26df807ec1b7fce999a2de93408a5b79c66c5ced-merged.mount: Deactivated successfully.
Oct 08 10:05:24 compute-0 podman[263360]: 2025-10-08 10:05:24.560370342 +0000 UTC m=+0.209204264 container remove 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:05:24 compute-0 systemd[1]: libpod-conmon-898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e.scope: Deactivated successfully.
Oct 08 10:05:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:24.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:24 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:24 compute-0 podman[263401]: 2025-10-08 10:05:24.731294304 +0000 UTC m=+0.042802664 container create 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:05:24 compute-0 systemd[1]: Started libpod-conmon-1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610.scope.
Oct 08 10:05:24 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:05:24 compute-0 podman[263401]: 2025-10-08 10:05:24.807191919 +0000 UTC m=+0.118700289 container init 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:05:24 compute-0 podman[263401]: 2025-10-08 10:05:24.714820406 +0000 UTC m=+0.026328786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:05:24 compute-0 podman[263401]: 2025-10-08 10:05:24.817660056 +0000 UTC m=+0.129168416 container start 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:05:24 compute-0 podman[263401]: 2025-10-08 10:05:24.822680797 +0000 UTC m=+0.134189177 container attach 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:05:25 compute-0 ceph-mon[73572]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:05:25 compute-0 sudo[263464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:05:25 compute-0 sudo[263464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:25 compute-0 sudo[263464]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:25 compute-0 lvm[263518]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:05:25 compute-0 lvm[263518]: VG ceph_vg0 finished
Oct 08 10:05:25 compute-0 sweet_lamport[263417]: {}
Oct 08 10:05:25 compute-0 systemd[1]: libpod-1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610.scope: Deactivated successfully.
Oct 08 10:05:25 compute-0 podman[263401]: 2025-10-08 10:05:25.538659816 +0000 UTC m=+0.850168176 container died 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:05:25 compute-0 systemd[1]: libpod-1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610.scope: Consumed 1.142s CPU time.
Oct 08 10:05:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59-merged.mount: Deactivated successfully.
Oct 08 10:05:25 compute-0 podman[263401]: 2025-10-08 10:05:25.581130999 +0000 UTC m=+0.892639359 container remove 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 10:05:25 compute-0 systemd[1]: libpod-conmon-1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610.scope: Deactivated successfully.
Oct 08 10:05:25 compute-0 sudo[263294]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:05:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:05:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:25 compute-0 sudo[263533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:05:25 compute-0 sudo[263533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:25 compute-0 sudo[263533]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:05:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:05:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:25 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:05:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:26 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:05:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:26.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:05:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:05:26 compute-0 ceph-mon[73572]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:05:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:26 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:26 compute-0 podman[263559]: 2025-10-08 10:05:26.989923406 +0000 UTC m=+0.139589440 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 10:05:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:27.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:05:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:27 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:05:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:05:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:28.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:05:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:05:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:28.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:29 compute-0 ceph-mon[73572]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:05:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:05:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:30 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:30.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:05:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:30.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:05:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:30 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:31 compute-0 ceph-mon[73572]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:05:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:05:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:05:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:05:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:05:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:32 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:05:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:32.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:05:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:32.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:32 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:05:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:05:33 compute-0 ceph-mon[73572]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:05:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:05:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:33 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:33 compute-0 podman[263593]: 2025-10-08 10:05:33.894850468 +0000 UTC m=+0.052269898 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 08 10:05:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:05:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:05:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:05:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:34.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:05:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.907 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.907 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.907 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.907 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.930 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.931 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.933 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.933 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:05:34 compute-0 nova_compute[262220]: 2025-10-08 10:05:34.933 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:05:35 compute-0 ceph-mon[73572]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:05:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:05:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837090167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:05:35 compute-0 nova_compute[262220]: 2025-10-08 10:05:35.411 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:05:35 compute-0 nova_compute[262220]: 2025-10-08 10:05:35.580 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:05:35 compute-0 nova_compute[262220]: 2025-10-08 10:05:35.581 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4892MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:05:35 compute-0 nova_compute[262220]: 2025-10-08 10:05:35.581 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:05:35 compute-0 nova_compute[262220]: 2025-10-08 10:05:35.581 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:05:35 compute-0 nova_compute[262220]: 2025-10-08 10:05:35.697 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:05:35 compute-0 nova_compute[262220]: 2025-10-08 10:05:35.697 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:05:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:05:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:05:35 compute-0 nova_compute[262220]: 2025-10-08 10:05:35.740 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:05:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:35 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:35 compute-0 podman[263638]: 2025-10-08 10:05:35.895920606 +0000 UTC m=+0.056118731 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 08 10:05:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:05:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:36.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3837090167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:05:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:05:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055616685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:05:36 compute-0 nova_compute[262220]: 2025-10-08 10:05:36.248 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:05:36 compute-0 nova_compute[262220]: 2025-10-08 10:05:36.253 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:05:36 compute-0 nova_compute[262220]: 2025-10-08 10:05:36.279 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:05:36 compute-0 nova_compute[262220]: 2025-10-08 10:05:36.280 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:05:36 compute-0 nova_compute[262220]: 2025-10-08 10:05:36.281 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:05:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:36.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:37.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:05:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:37.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:05:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:37.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:05:37 compute-0 ceph-mon[73572]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:05:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2055616685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:05:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2610485348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:05:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2141938182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:05:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:37 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:05:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:38 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:05:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:38.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:05:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3277094778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:05:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/48708837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:05:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:38.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:38 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:39 compute-0 ceph-mon[73572]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:05:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:39 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:05:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:40 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:40.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:40 compute-0 ceph-mon[73572]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:05:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:40.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:40 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100540 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:05:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:41 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 10:05:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:42.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:42.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:43 compute-0 ceph-mon[73572]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 10:05:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:43 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 10:05:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:44.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:44.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:44 compute-0 podman[263689]: 2025-10-08 10:05:44.905215401 +0000 UTC m=+0.061809504 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 08 10:05:45 compute-0 ceph-mon[73572]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 10:05:45 compute-0 sudo[263712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:05:45 compute-0 sudo[263712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:05:45 compute-0 sudo[263712]: pam_unix(sudo:session): session closed for user root
Oct 08 10:05:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:05:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:05:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:45 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:05:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:46.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:05:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:46.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:05:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:47.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:05:47 compute-0 ceph-mon[73572]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:05:47
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['images', '.nfs', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'volumes', '.mgr']
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:05:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:05:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:05:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:47 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:05:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:05:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:48.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:05:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:05:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:05:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:48.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:49 compute-0 ceph-mon[73572]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:05:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:49 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:05:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:50.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:50 compute-0 ceph-mon[73572]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:05:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:50.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:51 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:52.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:52.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:53 compute-0 ceph-mon[73572]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:05:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:53 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Oct 08 10:05:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:05:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:54.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:05:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:54.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:55 compute-0 ceph-mon[73572]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Oct 08 10:05:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:05:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct 08 10:05:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:55 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Oct 08 10:05:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:56.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:56 compute-0 ceph-mon[73572]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Oct 08 10:05:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:56.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:57.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:05:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:57.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:05:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:57.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:05:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:05:57.404 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:05:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:05:57.404 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:05:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:05:57.404 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:05:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:57 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:57 compute-0 podman[263749]: 2025-10-08 10:05:57.925577639 +0000 UTC m=+0.078096947 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 08 10:05:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Oct 08 10:05:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:58.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:05:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:05:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:58.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:05:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:05:59 compute-0 ceph-mon[73572]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Oct 08 10:05:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:05:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Oct 08 10:06:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:00.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:00.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:01 compute-0 ceph-mon[73572]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Oct 08 10:06:01 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.24538 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 08 10:06:01 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 08 10:06:01 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 08 10:06:01 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.24544 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 08 10:06:01 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 08 10:06:01 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 08 10:06:01 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.24538 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 08 10:06:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:01 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Oct 08 10:06:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:06:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:02.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:06:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3678047400' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 08 10:06:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/460927921' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 08 10:06:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:02.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:06:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:06:03 compute-0 ceph-mon[73572]: from='client.24538 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 08 10:06:03 compute-0 ceph-mon[73572]: from='client.24544 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 08 10:06:03 compute-0 ceph-mon[73572]: from='client.24538 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 08 10:06:03 compute-0 ceph-mon[73572]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Oct 08 10:06:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:06:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:03 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Oct 08 10:06:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:04.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:04 compute-0 ceph-mon[73572]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Oct 08 10:06:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:06:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:04.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:06:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:04 compute-0 podman[263783]: 2025-10-08 10:06:04.895965014 +0000 UTC m=+0.051077710 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 08 10:06:05 compute-0 sudo[263803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:06:05 compute-0 sudo[263803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:05 compute-0 sudo[263803]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:05] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 10:06:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:05] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 10:06:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:05 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct 08 10:06:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:06.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:06.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:06 compute-0 podman[263829]: 2025-10-08 10:06:06.888973083 +0000 UTC m=+0.051283606 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct 08 10:06:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:07.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:06:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:07.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:06:07 compute-0 ceph-mon[73572]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct 08 10:06:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:07 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct 08 10:06:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:08 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:08.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:06:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:08.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:06:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:08 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:09 compute-0 ceph-mon[73572]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct 08 10:06:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:09 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct 08 10:06:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:10 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:10.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:10.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:10 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:11 compute-0 ceph-mon[73572]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct 08 10:06:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:11 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:12 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:12.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:12.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:12 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:13 compute-0 ceph-mon[73572]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:13 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638001c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:14 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:14.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:14 compute-0 ceph-mon[73572]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:14 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:15] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 10:06:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:15] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct 08 10:06:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:15 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:15 compute-0 podman[263860]: 2025-10-08 10:06:15.893459253 +0000 UTC m=+0.058306581 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 08 10:06:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:16 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638001c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:16.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:16.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:16 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:17.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:06:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:17.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:06:17 compute-0 ceph-mon[73572]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:06:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:06:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:17 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:06:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:06:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:18 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:06:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:06:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:06:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:06:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:18.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:06:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:06:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:18.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:06:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:18 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638002940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:19 compute-0 ceph-mon[73572]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:19 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:06:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:20 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:20.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:20.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:20 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:21 compute-0 ceph-mon[73572]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:06:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2428052803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:06:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2428052803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:06:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:21 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:22 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:22.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:22.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:22 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 08 10:06:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992189617' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 08 10:06:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 08 10:06:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893782256' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 08 10:06:23 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.15084 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 08 10:06:23 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 08 10:06:23 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 08 10:06:23 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.15081 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 08 10:06:23 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 08 10:06:23 compute-0 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 08 10:06:23 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.15081 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 08 10:06:23 compute-0 ceph-mon[73572]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:23 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3992189617' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 08 10:06:23 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/893782256' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 08 10:06:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:23 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:24 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638002940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:06:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:06:24 compute-0 ceph-mon[73572]: from='client.15084 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 08 10:06:24 compute-0 ceph-mon[73572]: from='client.15081 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 08 10:06:24 compute-0 ceph-mon[73572]: from='client.15081 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 08 10:06:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:24.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:24 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:25 compute-0 ceph-mon[73572]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:25 compute-0 sudo[263890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:06:25 compute-0 sudo[263890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:25 compute-0 sudo[263890]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:25] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Oct 08 10:06:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:25] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Oct 08 10:06:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:25 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:26 compute-0 sudo[263916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:06:26 compute-0 sudo[263916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:26 compute-0 sudo[263916]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:26 compute-0 sudo[263941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:06:26 compute-0 sudo[263941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:26 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:26.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:26 compute-0 ceph-mon[73572]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:26 compute-0 sudo[263941]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:06:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:06:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:06:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:06:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:06:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:26.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:06:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:26 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:06:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:06:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:06:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:06:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:06:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:06:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:06:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:06:26 compute-0 sudo[263996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:06:26 compute-0 sudo[263996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:26 compute-0 sudo[263996]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:26 compute-0 sudo[264021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:06:26 compute-0 sudo[264021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:27.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:06:27 compute-0 podman[264087]: 2025-10-08 10:06:27.275153982 +0000 UTC m=+0.040138149 container create 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:06:27 compute-0 systemd[1]: Started libpod-conmon-417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de.scope.
Oct 08 10:06:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:06:27 compute-0 podman[264087]: 2025-10-08 10:06:27.350497399 +0000 UTC m=+0.115481596 container init 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 10:06:27 compute-0 podman[264087]: 2025-10-08 10:06:27.258166067 +0000 UTC m=+0.023150254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:06:27 compute-0 podman[264087]: 2025-10-08 10:06:27.359300561 +0000 UTC m=+0.124284728 container start 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:06:27 compute-0 podman[264087]: 2025-10-08 10:06:27.362775823 +0000 UTC m=+0.127759990 container attach 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 10:06:27 compute-0 epic_gagarin[264103]: 167 167
Oct 08 10:06:27 compute-0 systemd[1]: libpod-417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de.scope: Deactivated successfully.
Oct 08 10:06:27 compute-0 podman[264087]: 2025-10-08 10:06:27.366105729 +0000 UTC m=+0.131089916 container died 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 10:06:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-19336887525d6be1e4060a1397bae3f6408a2701bdba3a3798676f8767cb7f7d-merged.mount: Deactivated successfully.
Oct 08 10:06:27 compute-0 podman[264087]: 2025-10-08 10:06:27.407943242 +0000 UTC m=+0.172927409 container remove 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 10:06:27 compute-0 systemd[1]: libpod-conmon-417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de.scope: Deactivated successfully.
Oct 08 10:06:27 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:06:27 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:06:27 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:06:27 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:06:27 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:06:27 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:06:27 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:06:27 compute-0 podman[264127]: 2025-10-08 10:06:27.5715201 +0000 UTC m=+0.047448444 container create 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 10:06:27 compute-0 systemd[1]: Started libpod-conmon-581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62.scope.
Oct 08 10:06:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:27 compute-0 podman[264127]: 2025-10-08 10:06:27.546872559 +0000 UTC m=+0.022800923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:06:27 compute-0 podman[264127]: 2025-10-08 10:06:27.683360088 +0000 UTC m=+0.159288432 container init 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:06:27 compute-0 podman[264127]: 2025-10-08 10:06:27.68996759 +0000 UTC m=+0.165895934 container start 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 10:06:27 compute-0 podman[264127]: 2025-10-08 10:06:27.695006941 +0000 UTC m=+0.170935335 container attach 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 08 10:06:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:27 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:28 compute-0 unruffled_einstein[264143]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:06:28 compute-0 unruffled_einstein[264143]: --> All data devices are unavailable
Oct 08 10:06:28 compute-0 systemd[1]: libpod-581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62.scope: Deactivated successfully.
Oct 08 10:06:28 compute-0 podman[264127]: 2025-10-08 10:06:28.039822734 +0000 UTC m=+0.515751078 container died 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:06:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c-merged.mount: Deactivated successfully.
Oct 08 10:06:28 compute-0 podman[264127]: 2025-10-08 10:06:28.127262329 +0000 UTC m=+0.603190673 container remove 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 08 10:06:28 compute-0 systemd[1]: libpod-conmon-581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62.scope: Deactivated successfully.
Oct 08 10:06:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:28 compute-0 sudo[264021]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:28 compute-0 podman[264159]: 2025-10-08 10:06:28.201505761 +0000 UTC m=+0.134980452 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:06:28 compute-0 sudo[264194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:06:28 compute-0 sudo[264194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:28 compute-0 sudo[264194]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:28 compute-0 sudo[264223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:06:28 compute-0 sudo[264223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:28 compute-0 ceph-mon[73572]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:06:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:28.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:06:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:28 compute-0 podman[264287]: 2025-10-08 10:06:28.755316118 +0000 UTC m=+0.061661299 container create efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:06:28 compute-0 systemd[1]: Started libpod-conmon-efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628.scope.
Oct 08 10:06:28 compute-0 podman[264287]: 2025-10-08 10:06:28.720795191 +0000 UTC m=+0.027140382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:06:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:06:28 compute-0 podman[264287]: 2025-10-08 10:06:28.879557984 +0000 UTC m=+0.185903175 container init efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:06:28 compute-0 podman[264287]: 2025-10-08 10:06:28.887279032 +0000 UTC m=+0.193624193 container start efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 08 10:06:28 compute-0 recursing_fermi[264302]: 167 167
Oct 08 10:06:28 compute-0 systemd[1]: libpod-efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628.scope: Deactivated successfully.
Oct 08 10:06:28 compute-0 podman[264287]: 2025-10-08 10:06:28.928499104 +0000 UTC m=+0.234844265 container attach efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 10:06:28 compute-0 podman[264287]: 2025-10-08 10:06:28.92898506 +0000 UTC m=+0.235330221 container died efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:06:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5cd42e7e32a274b4d17ef5e4d76f6f4c1808a20708453209bd88a1635254051-merged.mount: Deactivated successfully.
Oct 08 10:06:29 compute-0 podman[264287]: 2025-10-08 10:06:29.082635859 +0000 UTC m=+0.388981020 container remove efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 10:06:29 compute-0 systemd[1]: libpod-conmon-efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628.scope: Deactivated successfully.
Oct 08 10:06:29 compute-0 podman[264328]: 2025-10-08 10:06:29.219382627 +0000 UTC m=+0.022717440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:06:29 compute-0 podman[264328]: 2025-10-08 10:06:29.374275005 +0000 UTC m=+0.177609768 container create bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:06:29 compute-0 systemd[1]: Started libpod-conmon-bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315.scope.
Oct 08 10:06:29 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:29 compute-0 podman[264328]: 2025-10-08 10:06:29.497422746 +0000 UTC m=+0.300757539 container init bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 10:06:29 compute-0 podman[264328]: 2025-10-08 10:06:29.505168985 +0000 UTC m=+0.308503748 container start bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 08 10:06:29 compute-0 podman[264328]: 2025-10-08 10:06:29.551364697 +0000 UTC m=+0.354699500 container attach bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]: {
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:     "1": [
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:         {
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "devices": [
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "/dev/loop3"
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             ],
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "lv_name": "ceph_lv0",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "lv_size": "21470642176",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "name": "ceph_lv0",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "tags": {
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.cluster_name": "ceph",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.crush_device_class": "",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.encrypted": "0",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.osd_id": "1",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.type": "block",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.vdo": "0",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:                 "ceph.with_tpm": "0"
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             },
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "type": "block",
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:             "vg_name": "ceph_vg0"
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:         }
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]:     ]
Oct 08 10:06:29 compute-0 quizzical_kepler[264344]: }
Oct 08 10:06:29 compute-0 systemd[1]: libpod-bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315.scope: Deactivated successfully.
Oct 08 10:06:29 compute-0 podman[264328]: 2025-10-08 10:06:29.788377471 +0000 UTC m=+0.591712234 container died bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:06:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:06:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6-merged.mount: Deactivated successfully.
Oct 08 10:06:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:30 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:30.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:30 compute-0 podman[264328]: 2025-10-08 10:06:30.192537147 +0000 UTC m=+0.995871920 container remove bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 10:06:30 compute-0 systemd[1]: libpod-conmon-bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315.scope: Deactivated successfully.
Oct 08 10:06:30 compute-0 sudo[264223]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:30 compute-0 sudo[264366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:06:30 compute-0 sudo[264366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:30 compute-0 sudo[264366]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:30 compute-0 sudo[264391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:06:30 compute-0 sudo[264391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:30.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:30 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:30 compute-0 podman[264456]: 2025-10-08 10:06:30.806289958 +0000 UTC m=+0.101974673 container create c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:06:30 compute-0 podman[264456]: 2025-10-08 10:06:30.771174551 +0000 UTC m=+0.066859296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:06:30 compute-0 systemd[1]: Started libpod-conmon-c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb.scope.
Oct 08 10:06:30 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:06:30 compute-0 podman[264456]: 2025-10-08 10:06:30.967566941 +0000 UTC m=+0.263251696 container init c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:06:30 compute-0 podman[264456]: 2025-10-08 10:06:30.976835909 +0000 UTC m=+0.272520634 container start c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:06:30 compute-0 strange_wu[264472]: 167 167
Oct 08 10:06:30 compute-0 systemd[1]: libpod-c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb.scope: Deactivated successfully.
Oct 08 10:06:31 compute-0 podman[264456]: 2025-10-08 10:06:31.081907099 +0000 UTC m=+0.377592004 container attach c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:06:31 compute-0 podman[264456]: 2025-10-08 10:06:31.082523669 +0000 UTC m=+0.378208424 container died c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 08 10:06:31 compute-0 ceph-mon[73572]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:06:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba8cd11d1ae1d668a21246b39952e8380d16769782fcc3133dd6d150ba6afeeb-merged.mount: Deactivated successfully.
Oct 08 10:06:31 compute-0 podman[264456]: 2025-10-08 10:06:31.344186835 +0000 UTC m=+0.639871560 container remove c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 10:06:31 compute-0 systemd[1]: libpod-conmon-c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb.scope: Deactivated successfully.
Oct 08 10:06:31 compute-0 podman[264497]: 2025-10-08 10:06:31.553647794 +0000 UTC m=+0.061207864 container create 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 10:06:31 compute-0 podman[264497]: 2025-10-08 10:06:31.519142257 +0000 UTC m=+0.026702347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:06:31 compute-0 systemd[1]: Started libpod-conmon-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope.
Oct 08 10:06:31 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:06:31 compute-0 podman[264497]: 2025-10-08 10:06:31.759771797 +0000 UTC m=+0.267331887 container init 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:06:31 compute-0 podman[264497]: 2025-10-08 10:06:31.767130932 +0000 UTC m=+0.274691002 container start 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:06:31 compute-0 podman[264497]: 2025-10-08 10:06:31.875522211 +0000 UTC m=+0.383082331 container attach 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:06:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:32 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:32.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:32 compute-0 lvm[264589]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:06:32 compute-0 lvm[264589]: VG ceph_vg0 finished
Oct 08 10:06:32 compute-0 nifty_williams[264514]: {}
Oct 08 10:06:32 compute-0 systemd[1]: libpod-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope: Deactivated successfully.
Oct 08 10:06:32 compute-0 systemd[1]: libpod-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope: Consumed 1.077s CPU time.
Oct 08 10:06:32 compute-0 conmon[264514]: conmon 7bcf36ec01925ca31a76 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope/container/memory.events
Oct 08 10:06:32 compute-0 podman[264497]: 2025-10-08 10:06:32.47866254 +0000 UTC m=+0.986222610 container died 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 10:06:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698-merged.mount: Deactivated successfully.
Oct 08 10:06:32 compute-0 podman[264497]: 2025-10-08 10:06:32.530195604 +0000 UTC m=+1.037755684 container remove 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:06:32 compute-0 systemd[1]: libpod-conmon-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope: Deactivated successfully.
Oct 08 10:06:32 compute-0 sudo[264391]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:06:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:06:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:06:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:06:32 compute-0 sudo[264606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:06:32 compute-0 sudo[264606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:32 compute-0 sudo[264606]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:32.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:32 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:06:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:06:33 compute-0 ceph-mon[73572]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:06:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:06:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:06:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:33 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:34.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:35 compute-0 ceph-mon[73572]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:35] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:06:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:35] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:06:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:35 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:35 compute-0 podman[264634]: 2025-10-08 10:06:35.95414687 +0000 UTC m=+0.105335201 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 08 10:06:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:06:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:36.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.274 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.275 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.294 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.295 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.295 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.308 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.308 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.309 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.309 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.309 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.309 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:06:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:06:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:06:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.913 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.913 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.914 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:06:36 compute-0 nova_compute[262220]: 2025-10-08 10:06:36.914 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:06:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:37.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:06:37 compute-0 ceph-mon[73572]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:06:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207116258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:06:37 compute-0 nova_compute[262220]: 2025-10-08 10:06:37.376 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:06:37 compute-0 nova_compute[262220]: 2025-10-08 10:06:37.595 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:06:37 compute-0 nova_compute[262220]: 2025-10-08 10:06:37.597 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4898MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:06:37 compute-0 nova_compute[262220]: 2025-10-08 10:06:37.597 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:06:37 compute-0 nova_compute[262220]: 2025-10-08 10:06:37.597 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:06:37 compute-0 nova_compute[262220]: 2025-10-08 10:06:37.691 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:06:37 compute-0 nova_compute[262220]: 2025-10-08 10:06:37.691 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:06:37 compute-0 nova_compute[262220]: 2025-10-08 10:06:37.721 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:06:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:37 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:37 compute-0 podman[264680]: 2025-10-08 10:06:37.903945663 +0000 UTC m=+0.065094749 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:06:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:06:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1440575470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:06:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:38 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:38 compute-0 nova_compute[262220]: 2025-10-08 10:06:38.173 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:06:38 compute-0 nova_compute[262220]: 2025-10-08 10:06:38.180 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:06:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:38.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:38 compute-0 nova_compute[262220]: 2025-10-08 10:06:38.202 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:06:38 compute-0 nova_compute[262220]: 2025-10-08 10:06:38.203 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:06:38 compute-0 nova_compute[262220]: 2025-10-08 10:06:38.204 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:06:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4207116258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:06:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1440575470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:06:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:06:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:06:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:38 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:39 compute-0 ceph-mon[73572]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1264140553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:06:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2931147659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:06:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:39 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:06:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:40 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:40.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3064354339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:06:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/74230358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:06:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:40.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:40 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:41 compute-0 ceph-mon[73572]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:06:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:41 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:42.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:42 compute-0 ceph-mon[73572]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:42.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100643 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:06:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:43 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:06:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:44.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:06:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:06:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:44.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:06:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:45 compute-0 ceph-mon[73572]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:45 compute-0 sudo[264730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:06:45 compute-0 sudo[264730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:06:45 compute-0 sudo[264730]: pam_unix(sudo:session): session closed for user root
Oct 08 10:06:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:45] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:06:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:45] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:06:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:45 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:06:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:46.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:06:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:06:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:06:46 compute-0 podman[264756]: 2025-10-08 10:06:46.810623176 +0000 UTC m=+0.042241966 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 10:06:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:47.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:06:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:47.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:06:47 compute-0 ceph-mon[73572]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:06:47
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'vms', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.mgr']
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:06:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:06:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:06:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:47 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:06:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:06:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:06:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:48.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:06:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:06:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:48.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:49 compute-0 ceph-mon[73572]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:06:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:49 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:06:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:50.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:50.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:51 compute-0 ceph-mon[73572]: pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:06:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:51 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 08 10:06:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:52.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:52.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100653 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:06:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:53 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:06:53 compute-0 ceph-mon[73572]: pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 08 10:06:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:53 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct 08 10:06:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:54.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:54.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:55 compute-0 ceph-mon[73572]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct 08 10:06:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:55] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:06:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:55] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct 08 10:06:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:55 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct 08 10:06:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:06:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:06:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:06:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:56.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:06:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:56.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:57.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:06:57 compute-0 ceph-mon[73572]: pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct 08 10:06:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:06:57.405 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:06:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:06:57.405 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:06:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:06:57.405 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:06:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:57 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Oct 08 10:06:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:06:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:58.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:06:58 compute-0 ceph-mon[73572]: pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Oct 08 10:06:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:06:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:06:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:06:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:58.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:06:58 compute-0 podman[264789]: 2025-10-08 10:06:58.938199751 +0000 UTC m=+0.098743419 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 08 10:06:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:06:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:06:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:06:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:00.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:00.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:01 compute-0 ceph-mon[73572]: pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:01 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:07:02 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:07:02.119 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:07:02 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:07:02.120 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:07:02 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:07:02.120 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:07:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:02.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:02.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:07:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:07:03 compute-0 ceph-mon[73572]: pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:07:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:03 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:07:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:04.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:04.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:05 compute-0 ceph-mon[73572]: pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:07:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:05 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:07:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:05] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:07:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:05] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:07:05 compute-0 sudo[264822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:07:05 compute-0 sudo[264822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:05 compute-0 sudo[264822]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:05 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:07:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:06.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy ignored for local
Oct 08 10:07:06 compute-0 kernel: ganesha.nfsd[263851]: segfault at 50 ip 00007f06f568532e sp 00007f06a97f9210 error 4 in libntirpc.so.5.8[7f06f566a000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 08 10:07:06 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 10:07:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:06.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:06 compute-0 systemd[1]: Started Process Core Dump (PID 264848/UID 0).
Oct 08 10:07:06 compute-0 podman[264849]: 2025-10-08 10:07:06.852157286 +0000 UTC m=+0.051842294 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 08 10:07:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:07.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:07:07 compute-0 ceph-mon[73572]: pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:07:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100707 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:07:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:07:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:08.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:08 compute-0 systemd-coredump[264850]: Process 262090 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007f06f568532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 10:07:08 compute-0 systemd[1]: systemd-coredump@9-264848-0.service: Deactivated successfully.
Oct 08 10:07:08 compute-0 systemd[1]: systemd-coredump@9-264848-0.service: Consumed 1.632s CPU time.
Oct 08 10:07:08 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 10:07:08 compute-0 podman[264878]: 2025-10-08 10:07:08.548073494 +0000 UTC m=+0.026449659 container died dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0-merged.mount: Deactivated successfully.
Oct 08 10:07:08 compute-0 podman[264878]: 2025-10-08 10:07:08.664857212 +0000 UTC m=+0.143233357 container remove dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 10:07:08 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 10:07:08 compute-0 podman[264876]: 2025-10-08 10:07:08.688945264 +0000 UTC m=+0.165652196 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:07:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:08.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:08 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 10:07:08 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.589s CPU time.
Oct 08 10:07:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:09 compute-0 ceph-mon[73572]: pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct 08 10:07:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 10:07:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:07:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:10.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:07:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:10.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:11 compute-0 ceph-mon[73572]: pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 10:07:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 10:07:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:12.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:12 compute-0 ceph-mon[73572]: pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 10:07:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:12.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100713 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:07:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 10:07:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 08 10:07:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:14.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 08 10:07:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:14.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100715 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:07:15 compute-0 ceph-mon[73572]: pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 08 10:07:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:07:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:07:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:07:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:16.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:07:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:16.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:07:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:17.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:07:17 compute-0 ceph-mon[73572]: pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:07:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:07:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:07:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:07:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:07:17 compute-0 podman[264949]: 2025-10-08 10:07:17.929206719 +0000 UTC m=+0.092700114 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid)
Oct 08 10:07:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:07:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:07:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:07:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:07:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:07:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:18.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:07:18 compute-0 ceph-mon[73572]: pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:07:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:07:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:18.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:07:18 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 10.
Oct 08 10:07:18 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:07:18 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.589s CPU time.
Oct 08 10:07:18 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 10:07:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:19 compute-0 podman[265021]: 2025-10-08 10:07:19.110784725 +0000 UTC m=+0.102394665 container create ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 10:07:19 compute-0 podman[265021]: 2025-10-08 10:07:19.032839375 +0000 UTC m=+0.024449315 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:19 compute-0 podman[265021]: 2025-10-08 10:07:19.236369395 +0000 UTC m=+0.227979415 container init ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 10:07:19 compute-0 podman[265021]: 2025-10-08 10:07:19.246382216 +0000 UTC m=+0.237992186 container start ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:07:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 10:07:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 10:07:19 compute-0 bash[265021]: ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c
Oct 08 10:07:19 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:07:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 10:07:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 10:07:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 10:07:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 10:07:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 10:07:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:07:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:07:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:20.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:20.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:21 compute-0 ceph-mon[73572]: pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Oct 08 10:07:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/15439890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:07:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/15439890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:07:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:07:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:22.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:22.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:23 compute-0 ceph-mon[73572]: pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:07:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:07:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:07:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:24.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:07:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:07:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:07:25 compute-0 ceph-mon[73572]: pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:07:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct 08 10:07:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct 08 10:07:25 compute-0 sudo[265085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:07:25 compute-0 sudo[265085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:25 compute-0 sudo[265085]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:07:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:26.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:26 compute-0 ceph-mon[73572]: pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:07:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:07:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:26.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:07:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:27.102Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:07:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:27.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:07:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:07:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:28.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:28.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:28 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:29 compute-0 ceph-mon[73572]: pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:07:29 compute-0 podman[265114]: 2025-10-08 10:07:29.909151196 +0000 UTC m=+0.075127038 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:07:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:07:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:07:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:30.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:07:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:07:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:30.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:07:31 compute-0 ceph-mon[73572]: pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 08 10:07:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:07:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:32.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:07:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:32.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:07:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:07:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:07:33 compute-0 sudo[265159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:07:33 compute-0 sudo[265159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:33 compute-0 sudo[265159]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:33 compute-0 sudo[265184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:07:33 compute-0 sudo[265184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:33 compute-0 ceph-mon[73572]: pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:07:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:07:33 compute-0 sudo[265184]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:07:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:07:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:07:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:07:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:07:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:07:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:07:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:07:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:07:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:07:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:07:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:07:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:07:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:07:33 compute-0 sudo[265242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:07:33 compute-0 sudo[265242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:33 compute-0 sudo[265242]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100733 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:07:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:33 compute-0 sudo[265267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:07:33 compute-0 sudo[265267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:07:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:34 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:07:34 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:07:34 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:07:34 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:07:34 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:07:34 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:07:34 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:07:34 compute-0 podman[265334]: 2025-10-08 10:07:34.367241043 +0000 UTC m=+0.026277680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:07:34 compute-0 podman[265334]: 2025-10-08 10:07:34.479026395 +0000 UTC m=+0.138063022 container create c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 10:07:34 compute-0 systemd[1]: Started libpod-conmon-c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8.scope.
Oct 08 10:07:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:07:34 compute-0 podman[265334]: 2025-10-08 10:07:34.614540124 +0000 UTC m=+0.273576761 container init c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:07:34 compute-0 podman[265334]: 2025-10-08 10:07:34.625979912 +0000 UTC m=+0.285016529 container start c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:07:34 compute-0 friendly_joliot[265349]: 167 167
Oct 08 10:07:34 compute-0 systemd[1]: libpod-c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8.scope: Deactivated successfully.
Oct 08 10:07:34 compute-0 conmon[265349]: conmon c65e5a024aaa432837b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8.scope/container/memory.events
Oct 08 10:07:34 compute-0 podman[265334]: 2025-10-08 10:07:34.654114401 +0000 UTC m=+0.313151018 container attach c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 10:07:34 compute-0 podman[265334]: 2025-10-08 10:07:34.655645521 +0000 UTC m=+0.314682148 container died c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd9399fce7d523dbeaeb4aefac73d5ea7434c88b68447df453b9ca977789a98a-merged.mount: Deactivated successfully.
Oct 08 10:07:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:34.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:34 compute-0 podman[265334]: 2025-10-08 10:07:34.867456204 +0000 UTC m=+0.526492831 container remove c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 08 10:07:34 compute-0 systemd[1]: libpod-conmon-c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8.scope: Deactivated successfully.
Oct 08 10:07:35 compute-0 podman[265376]: 2025-10-08 10:07:35.058371593 +0000 UTC m=+0.057679754 container create e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:07:35 compute-0 systemd[1]: Started libpod-conmon-e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691.scope.
Oct 08 10:07:35 compute-0 podman[265376]: 2025-10-08 10:07:35.029346635 +0000 UTC m=+0.028654816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:07:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:35 compute-0 podman[265376]: 2025-10-08 10:07:35.197812898 +0000 UTC m=+0.197121079 container init e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 10:07:35 compute-0 podman[265376]: 2025-10-08 10:07:35.205843628 +0000 UTC m=+0.205151789 container start e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 10:07:35 compute-0 podman[265376]: 2025-10-08 10:07:35.213372981 +0000 UTC m=+0.212681162 container attach e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:07:35 compute-0 ceph-mon[73572]: pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:07:35 compute-0 vibrant_blackburn[265394]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:07:35 compute-0 vibrant_blackburn[265394]: --> All data devices are unavailable
Oct 08 10:07:35 compute-0 systemd[1]: libpod-e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691.scope: Deactivated successfully.
Oct 08 10:07:35 compute-0 podman[265376]: 2025-10-08 10:07:35.546621168 +0000 UTC m=+0.545929329 container died e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 10:07:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:35] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:07:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:35] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d-merged.mount: Deactivated successfully.
Oct 08 10:07:35 compute-0 podman[265376]: 2025-10-08 10:07:35.888934128 +0000 UTC m=+0.888242279 container remove e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 10:07:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:35 compute-0 systemd[1]: libpod-conmon-e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691.scope: Deactivated successfully.
Oct 08 10:07:35 compute-0 sudo[265267]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:36 compute-0 sudo[265425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:07:36 compute-0 sudo[265425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:36 compute-0 sudo[265425]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:36 compute-0 sudo[265450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:07:36 compute-0 sudo[265450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.204 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.205 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.205 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.205 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:07:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:36.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:36 compute-0 ceph-mon[73572]: pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:36 compute-0 podman[265516]: 2025-10-08 10:07:36.526240319 +0000 UTC m=+0.072450812 container create 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:07:36 compute-0 systemd[1]: Started libpod-conmon-404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532.scope.
Oct 08 10:07:36 compute-0 podman[265516]: 2025-10-08 10:07:36.479000153 +0000 UTC m=+0.025210666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:07:36 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:07:36 compute-0 podman[265516]: 2025-10-08 10:07:36.636986997 +0000 UTC m=+0.183197520 container init 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:07:36 compute-0 podman[265516]: 2025-10-08 10:07:36.64513371 +0000 UTC m=+0.191344203 container start 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 10:07:36 compute-0 goofy_goldwasser[265532]: 167 167
Oct 08 10:07:36 compute-0 systemd[1]: libpod-404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532.scope: Deactivated successfully.
Oct 08 10:07:36 compute-0 conmon[265532]: conmon 404943f1a74397422b25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532.scope/container/memory.events
Oct 08 10:07:36 compute-0 podman[265516]: 2025-10-08 10:07:36.68197841 +0000 UTC m=+0.228188903 container attach 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:07:36 compute-0 podman[265516]: 2025-10-08 10:07:36.683292523 +0000 UTC m=+0.229503016 container died 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 10:07:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdcab4249034ca836bca7e06fedad241ca385d97e076e7de38e6dfeb1e4b2afe-merged.mount: Deactivated successfully.
Oct 08 10:07:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:36.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:36 compute-0 podman[265516]: 2025-10-08 10:07:36.849354058 +0000 UTC m=+0.395564551 container remove 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 10:07:36 compute-0 systemd[1]: libpod-conmon-404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532.scope: Deactivated successfully.
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.902 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.902 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:07:36 compute-0 nova_compute[262220]: 2025-10-08 10:07:36.902 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:07:37 compute-0 podman[265558]: 2025-10-08 10:07:37.014952318 +0000 UTC m=+0.049022404 container create a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 10:07:37 compute-0 systemd[1]: Started libpod-conmon-a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8.scope.
Oct 08 10:07:37 compute-0 podman[265558]: 2025-10-08 10:07:36.988811333 +0000 UTC m=+0.022881439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:07:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:37.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:07:37 compute-0 podman[265558]: 2025-10-08 10:07:37.191808063 +0000 UTC m=+0.225878179 container init a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:07:37 compute-0 podman[265573]: 2025-10-08 10:07:37.199837642 +0000 UTC m=+0.149587684 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct 08 10:07:37 compute-0 podman[265558]: 2025-10-08 10:07:37.204623036 +0000 UTC m=+0.238693142 container start a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:07:37 compute-0 podman[265558]: 2025-10-08 10:07:37.244868297 +0000 UTC m=+0.278938383 container attach a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:07:37 compute-0 silly_feynman[265581]: {
Oct 08 10:07:37 compute-0 silly_feynman[265581]:     "1": [
Oct 08 10:07:37 compute-0 silly_feynman[265581]:         {
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "devices": [
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "/dev/loop3"
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             ],
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "lv_name": "ceph_lv0",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "lv_size": "21470642176",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "name": "ceph_lv0",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "tags": {
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.cluster_name": "ceph",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.crush_device_class": "",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.encrypted": "0",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.osd_id": "1",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.type": "block",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.vdo": "0",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:                 "ceph.with_tpm": "0"
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             },
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "type": "block",
Oct 08 10:07:37 compute-0 silly_feynman[265581]:             "vg_name": "ceph_vg0"
Oct 08 10:07:37 compute-0 silly_feynman[265581]:         }
Oct 08 10:07:37 compute-0 silly_feynman[265581]:     ]
Oct 08 10:07:37 compute-0 silly_feynman[265581]: }
Oct 08 10:07:37 compute-0 systemd[1]: libpod-a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8.scope: Deactivated successfully.
Oct 08 10:07:37 compute-0 podman[265558]: 2025-10-08 10:07:37.544776647 +0000 UTC m=+0.578846733 container died a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 10:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b-merged.mount: Deactivated successfully.
Oct 08 10:07:37 compute-0 podman[265558]: 2025-10-08 10:07:37.862781471 +0000 UTC m=+0.896851557 container remove a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 10:07:37 compute-0 systemd[1]: libpod-conmon-a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8.scope: Deactivated successfully.
Oct 08 10:07:37 compute-0 nova_compute[262220]: 2025-10-08 10:07:37.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:07:37 compute-0 nova_compute[262220]: 2025-10-08 10:07:37.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:07:37 compute-0 sudo[265450]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:37 compute-0 sudo[265619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:07:37 compute-0 sudo[265619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:37 compute-0 sudo[265619]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:38 compute-0 sudo[265644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:07:38 compute-0 sudo[265644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:38 compute-0 podman[265708]: 2025-10-08 10:07:38.422289068 +0000 UTC m=+0.039695453 container create a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:07:38 compute-0 systemd[1]: Started libpod-conmon-a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991.scope.
Oct 08 10:07:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:07:38 compute-0 podman[265708]: 2025-10-08 10:07:38.403361597 +0000 UTC m=+0.020768012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:07:38 compute-0 podman[265708]: 2025-10-08 10:07:38.520538203 +0000 UTC m=+0.137944688 container init a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:07:38 compute-0 podman[265708]: 2025-10-08 10:07:38.527964053 +0000 UTC m=+0.145370438 container start a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:07:38 compute-0 crazy_wescoff[265724]: 167 167
Oct 08 10:07:38 compute-0 systemd[1]: libpod-a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991.scope: Deactivated successfully.
Oct 08 10:07:38 compute-0 podman[265708]: 2025-10-08 10:07:38.544782987 +0000 UTC m=+0.162189372 container attach a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Oct 08 10:07:38 compute-0 podman[265708]: 2025-10-08 10:07:38.545281792 +0000 UTC m=+0.162688177 container died a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 10:07:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c612f9adb751a781b4e1ff705e4b0e9a971ea59168795f3113a6382b31008081-merged.mount: Deactivated successfully.
Oct 08 10:07:38 compute-0 podman[265708]: 2025-10-08 10:07:38.619725818 +0000 UTC m=+0.237132203 container remove a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:07:38 compute-0 systemd[1]: libpod-conmon-a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991.scope: Deactivated successfully.
Oct 08 10:07:38 compute-0 podman[265750]: 2025-10-08 10:07:38.791108885 +0000 UTC m=+0.051992911 container create bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:07:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:38.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:38 compute-0 systemd[1]: Started libpod-conmon-bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d.scope.
Oct 08 10:07:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:07:38 compute-0 podman[265750]: 2025-10-08 10:07:38.761513668 +0000 UTC m=+0.022397674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:07:38 compute-0 podman[265750]: 2025-10-08 10:07:38.86927808 +0000 UTC m=+0.130162086 container init bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:07:38 compute-0 podman[265750]: 2025-10-08 10:07:38.876317068 +0000 UTC m=+0.137201054 container start bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 10:07:38 compute-0 nova_compute[262220]: 2025-10-08 10:07:38.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:07:38 compute-0 podman[265750]: 2025-10-08 10:07:38.901387057 +0000 UTC m=+0.162271043 container attach bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:07:38 compute-0 nova_compute[262220]: 2025-10-08 10:07:38.917 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:07:38 compute-0 nova_compute[262220]: 2025-10-08 10:07:38.917 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:07:38 compute-0 nova_compute[262220]: 2025-10-08 10:07:38.918 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:07:38 compute-0 nova_compute[262220]: 2025-10-08 10:07:38.918 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:07:38 compute-0 nova_compute[262220]: 2025-10-08 10:07:38.918 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:07:38 compute-0 podman[265764]: 2025-10-08 10:07:38.947724605 +0000 UTC m=+0.118847261 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 08 10:07:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:39 compute-0 ceph-mon[73572]: pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:07:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616470647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:07:39 compute-0 nova_compute[262220]: 2025-10-08 10:07:39.397 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:07:39 compute-0 lvm[265884]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:07:39 compute-0 lvm[265884]: VG ceph_vg0 finished
Oct 08 10:07:39 compute-0 cool_carver[265767]: {}
Oct 08 10:07:39 compute-0 systemd[1]: libpod-bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d.scope: Deactivated successfully.
Oct 08 10:07:39 compute-0 systemd[1]: libpod-bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d.scope: Consumed 1.082s CPU time.
Oct 08 10:07:39 compute-0 podman[265750]: 2025-10-08 10:07:39.561293629 +0000 UTC m=+0.822177615 container died bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 08 10:07:39 compute-0 nova_compute[262220]: 2025-10-08 10:07:39.565 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:07:39 compute-0 nova_compute[262220]: 2025-10-08 10:07:39.568 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4866MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:07:39 compute-0 nova_compute[262220]: 2025-10-08 10:07:39.568 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:07:39 compute-0 nova_compute[262220]: 2025-10-08 10:07:39.568 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d-merged.mount: Deactivated successfully.
Oct 08 10:07:39 compute-0 podman[265750]: 2025-10-08 10:07:39.757698894 +0000 UTC m=+1.018582890 container remove bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Oct 08 10:07:39 compute-0 systemd[1]: libpod-conmon-bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d.scope: Deactivated successfully.
Oct 08 10:07:39 compute-0 sudo[265644]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:07:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:07:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:07:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:07:39 compute-0 nova_compute[262220]: 2025-10-08 10:07:39.906 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:07:39 compute-0 nova_compute[262220]: 2025-10-08 10:07:39.906 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:07:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:39 compute-0 sudo[265903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:07:39 compute-0 sudo[265903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:39 compute-0 sudo[265903]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:39 compute-0 nova_compute[262220]: 2025-10-08 10:07:39.976 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:07:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1559491885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:07:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3616470647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:07:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:07:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:07:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:07:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3850252230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:07:40 compute-0 nova_compute[262220]: 2025-10-08 10:07:40.427 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:07:40 compute-0 nova_compute[262220]: 2025-10-08 10:07:40.433 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:07:40 compute-0 nova_compute[262220]: 2025-10-08 10:07:40.454 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:07:40 compute-0 nova_compute[262220]: 2025-10-08 10:07:40.456 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:07:40 compute-0 nova_compute[262220]: 2025-10-08 10:07:40.456 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:07:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:07:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:40.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:07:40 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 08 10:07:40 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:40.923507) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:07:40 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 08 10:07:40 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918060923544, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2124, "num_deletes": 251, "total_data_size": 4180276, "memory_usage": 4230352, "flush_reason": "Manual Compaction"}
Oct 08 10:07:40 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061008260, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4077896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20001, "largest_seqno": 22124, "table_properties": {"data_size": 4068457, "index_size": 5933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19458, "raw_average_key_size": 20, "raw_value_size": 4049613, "raw_average_value_size": 4192, "num_data_blocks": 261, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917843, "oldest_key_time": 1759917843, "file_creation_time": 1759918060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 84806 microseconds, and 9017 cpu microseconds.
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.008307) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4077896 bytes OK
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.008330) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.012711) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.012739) EVENT_LOG_v1 {"time_micros": 1759918061012733, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.012758) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4171697, prev total WAL file size 4171697, number of live WAL files 2.
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.013763) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3982KB)], [44(12MB)]
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061013792, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 16938325, "oldest_snapshot_seqno": -1}
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5419 keys, 14760604 bytes, temperature: kUnknown
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061176700, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14760604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14722292, "index_size": 23674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136667, "raw_average_key_size": 25, "raw_value_size": 14622126, "raw_average_value_size": 2698, "num_data_blocks": 976, "num_entries": 5419, "num_filter_entries": 5419, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.177025) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14760604 bytes
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.203849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.9 rd, 90.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.3 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 5937, records dropped: 518 output_compression: NoCompression
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.203891) EVENT_LOG_v1 {"time_micros": 1759918061203876, "job": 22, "event": "compaction_finished", "compaction_time_micros": 163095, "compaction_time_cpu_micros": 26132, "output_level": 6, "num_output_files": 1, "total_output_size": 14760604, "num_input_records": 5937, "num_output_records": 5419, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061205857, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061208739, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.013663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:07:41 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:07:41 compute-0 ceph-mon[73572]: pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:07:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3850252230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:07:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/996559225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:07:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1096494060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:07:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002ee0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:07:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 08 10:07:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:42.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 08 10:07:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2155949947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:07:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:07:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:42.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:07:43 compute-0 ceph-mon[73572]: pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:07:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:07:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002ee0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:44.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:44 compute-0 ceph-mon[73572]: pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 08 10:07:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:44.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:45] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:07:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:45] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:07:45 compute-0 sudo[265956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:07:45 compute-0 sudo[265956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:07:45 compute-0 sudo[265956]: pam_unix(sudo:session): session closed for user root
Oct 08 10:07:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:07:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:46.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:07:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:47.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:07:47 compute-0 ceph-mon[73572]: pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:07:47
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'default.rgw.log', 'images', '.rgw.root', '.nfs', 'default.rgw.meta']
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:07:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:07:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:07:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:07:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:07:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:07:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:07:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:48.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:48 compute-0 podman[265984]: 2025-10-08 10:07:48.888268646 +0000 UTC m=+0.052372933 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 08 10:07:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:49 compute-0 ceph-mon[73572]: pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:49 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct 08 10:07:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:07:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:50.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:50 compute-0 ceph-mon[73572]: pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:07:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:50.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:52.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:52.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:53 compute-0 ceph-mon[73572]: pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:07:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:54.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:54.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:55 compute-0 ceph-mon[73572]: pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:07:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:55] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:07:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:55] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:07:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:56.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:56 compute-0 PackageKit[193649]: daemon quit
Oct 08 10:07:56 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 08 10:07:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:56.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:57.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:07:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:57.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:07:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:57.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:07:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:07:57.406 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:07:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:07:57.406 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:07:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:07:57.406 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:07:57 compute-0 ceph-mon[73572]: pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:07:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:58.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:07:58 compute-0 ceph-mon[73572]: pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:07:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:07:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:07:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:07:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:58.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:07:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:07:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:08:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:08:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:08:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:00 compute-0 podman[266017]: 2025-10-08 10:08:00.920729315 +0000 UTC m=+0.081370320 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:08:01 compute-0 ceph-mon[73572]: pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 08 10:08:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:08:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:02.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:08:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:08:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:08:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:02.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:03 compute-0 ceph-mon[73572]: pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:08:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:08:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:04.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:04.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:05 compute-0 ceph-mon[73572]: pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:08:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:05] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:08:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:05] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:08:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:05 compute-0 sudo[266052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:08:05 compute-0 sudo[266052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:06 compute-0 sudo[266052]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78000d90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:08:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:06.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:08:06 compute-0 ceph-mon[73572]: pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:08:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:06.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:08:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:07.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:08:07 compute-0 podman[266078]: 2025-10-08 10:08:07.902286323 +0000 UTC m=+0.058776890 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 08 10:08:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:08.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780018b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:08.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100809 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:08:09 compute-0 ceph-mon[73572]: pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=cleanup t=2025-10-08T10:08:09.522414799Z level=info msg="Completed cleanup jobs" duration=86.99335ms
Oct 08 10:08:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T10:08:09.573661374Z level=info msg="Update check succeeded" duration=58.483919ms
Oct 08 10:08:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T10:08:09.576582359Z level=info msg="Update check succeeded" duration=61.750856ms
Oct 08 10:08:09 compute-0 podman[266099]: 2025-10-08 10:08:09.897135746 +0000 UTC m=+0.056861908 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 08 10:08:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:08:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:08:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:08:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:08:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:10.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:08:11 compute-0 ceph-mon[73572]: pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:08:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780018b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:08:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:12.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:12 compute-0 ceph-mon[73572]: pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:08:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:12.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:14.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:15 compute-0 ceph-mon[73572]: pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:08:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:08:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:08:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:16.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:17.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:08:17 compute-0 ceph-mon[73572]: pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:08:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:08:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:08:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:08:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:08:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:08:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:08:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:08:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:08:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:08:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:18.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:08:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:08:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:18.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:19 compute-0 ceph-mon[73572]: pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:08:19 compute-0 podman[266129]: 2025-10-08 10:08:19.893702188 +0000 UTC m=+0.054233894 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:08:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:08:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:20.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 08 10:08:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658350131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:08:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 08 10:08:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658350131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:08:20 compute-0 ceph-mon[73572]: pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:08:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2658350131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:08:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2658350131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:08:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:08:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:20.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:08:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:08:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:08:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:08:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:22.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:22.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:23 compute-0 ceph-mon[73572]: pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:08:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:08:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:08:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:24.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:25 compute-0 ceph-mon[73572]: pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:08:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:08:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct 08 10:08:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:08:26 compute-0 sudo[266157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:08:26 compute-0 sudo[266157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:26 compute-0 sudo[266157]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:08:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:26.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:08:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:26.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:27.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:08:27 compute-0 ceph-mon[73572]: pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:08:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:08:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:28.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:28.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:29 compute-0 ceph-mon[73572]: pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 08 10:08:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:08:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:08:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:08:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:08:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:30.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:08:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100831 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:08:31 compute-0 ceph-mon[73572]: pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 08 10:08:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:31 compute-0 podman[266188]: 2025-10-08 10:08:31.991115646 +0000 UTC m=+0.157800070 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:08:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:08:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:08:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:32.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:08:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:08:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:08:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:32.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:33 compute-0 ceph-mon[73572]: pgmap v710: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:08:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:08:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:08:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:34.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001090 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:34.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:35 compute-0 ceph-mon[73572]: pgmap v711: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:08:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:08:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:08:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:08:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:36.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001080 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:36.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:37.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:08:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:37.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:08:37 compute-0 nova_compute[262220]: 2025-10-08 10:08:37.452 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:37 compute-0 nova_compute[262220]: 2025-10-08 10:08:37.452 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:37 compute-0 nova_compute[262220]: 2025-10-08 10:08:37.452 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:37 compute-0 nova_compute[262220]: 2025-10-08 10:08:37.453 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:08:37 compute-0 ceph-mon[73572]: pgmap v712: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:08:37 compute-0 nova_compute[262220]: 2025-10-08 10:08:37.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:37 compute-0 nova_compute[262220]: 2025-10-08 10:08:37.899 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:37 compute-0 nova_compute[262220]: 2025-10-08 10:08:37.899 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001090 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:08:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:38.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:38 compute-0 ceph-mon[73572]: pgmap v713: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:08:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:38 compute-0 podman[266223]: 2025-10-08 10:08:38.881454707 +0000 UTC m=+0.047274899 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.885 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.909 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.910 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.934 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.934 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.934 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.935 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:08:38 compute-0 nova_compute[262220]: 2025-10-08 10:08:38.935 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:08:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:08:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761483830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:08:39 compute-0 nova_compute[262220]: 2025-10-08 10:08:39.398 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:08:39 compute-0 nova_compute[262220]: 2025-10-08 10:08:39.539 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:08:39 compute-0 nova_compute[262220]: 2025-10-08 10:08:39.540 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4885MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:08:39 compute-0 nova_compute[262220]: 2025-10-08 10:08:39.540 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:08:39 compute-0 nova_compute[262220]: 2025-10-08 10:08:39.540 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:08:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2761483830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:08:39 compute-0 nova_compute[262220]: 2025-10-08 10:08:39.598 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:08:39 compute-0 nova_compute[262220]: 2025-10-08 10:08:39.599 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:08:39 compute-0 nova_compute[262220]: 2025-10-08 10:08:39.621 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:08:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001080 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:08:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574481894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:08:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:08:40 compute-0 nova_compute[262220]: 2025-10-08 10:08:40.065 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:08:40 compute-0 nova_compute[262220]: 2025-10-08 10:08:40.071 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:08:40 compute-0 nova_compute[262220]: 2025-10-08 10:08:40.086 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:08:40 compute-0 nova_compute[262220]: 2025-10-08 10:08:40.088 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:08:40 compute-0 nova_compute[262220]: 2025-10-08 10:08:40.088 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:08:40 compute-0 sudo[266289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:08:40 compute-0 sudo[266289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:40 compute-0 sudo[266289]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001090 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:40 compute-0 sudo[266320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:08:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:40 compute-0 sudo[266320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:40 compute-0 podman[266313]: 2025-10-08 10:08:40.32449738 +0000 UTC m=+0.060739173 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 10:08:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/574481894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:08:40 compute-0 ceph-mon[73572]: pgmap v714: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:08:40 compute-0 sudo[266320]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 08 10:08:40 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 10:08:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:40.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:41 compute-0 nova_compute[262220]: 2025-10-08 10:08:41.065 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:41 compute-0 nova_compute[262220]: 2025-10-08 10:08:41.065 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:08:41 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 10:08:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3570722334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:08:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:42.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 10:08:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 10:08:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 10:08:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 10:08:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2158667941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:08:42 compute-0 ceph-mon[73572]: pgmap v715: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:42 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001090 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:42.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:08:43 compute-0 sudo[266394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:08:43 compute-0 sudo[266394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:43 compute-0 sudo[266394]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:43 compute-0 sudo[266419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:08:43 compute-0 sudo[266419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3596140279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:08:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:08:43 compute-0 podman[266484]: 2025-10-08 10:08:43.866942874 +0000 UTC m=+0.046887456 container create c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 08 10:08:43 compute-0 systemd[1]: Started libpod-conmon-c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4.scope.
Oct 08 10:08:43 compute-0 podman[266484]: 2025-10-08 10:08:43.843546928 +0000 UTC m=+0.023491530 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:08:43 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:08:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:43 compute-0 podman[266484]: 2025-10-08 10:08:43.993429691 +0000 UTC m=+0.173374293 container init c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:08:44 compute-0 podman[266484]: 2025-10-08 10:08:44.003289089 +0000 UTC m=+0.183233671 container start c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:08:44 compute-0 podman[266484]: 2025-10-08 10:08:44.008354002 +0000 UTC m=+0.188298584 container attach c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:08:44 compute-0 romantic_greider[266501]: 167 167
Oct 08 10:08:44 compute-0 systemd[1]: libpod-c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4.scope: Deactivated successfully.
Oct 08 10:08:44 compute-0 podman[266484]: 2025-10-08 10:08:44.013526139 +0000 UTC m=+0.193470721 container died c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:08:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-be3c5ee330fe83beed7cf990a70374fe136c8d58c66c2aedb1b834bef817019c-merged.mount: Deactivated successfully.
Oct 08 10:08:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:44 compute-0 podman[266484]: 2025-10-08 10:08:44.080453633 +0000 UTC m=+0.260398225 container remove c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 10:08:44 compute-0 systemd[1]: libpod-conmon-c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4.scope: Deactivated successfully.
Oct 08 10:08:44 compute-0 podman[266525]: 2025-10-08 10:08:44.259593491 +0000 UTC m=+0.043143136 container create cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 10:08:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:44 compute-0 systemd[1]: Started libpod-conmon-cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4.scope.
Oct 08 10:08:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:44.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:44 compute-0 podman[266525]: 2025-10-08 10:08:44.243300823 +0000 UTC m=+0.026850498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:08:44 compute-0 podman[266525]: 2025-10-08 10:08:44.360714337 +0000 UTC m=+0.144264002 container init cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 10:08:44 compute-0 podman[266525]: 2025-10-08 10:08:44.369193381 +0000 UTC m=+0.152743026 container start cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:08:44 compute-0 podman[266525]: 2025-10-08 10:08:44.377414436 +0000 UTC m=+0.160964101 container attach cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Oct 08 10:08:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2238772314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:08:44 compute-0 ceph-mon[73572]: pgmap v716: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:44 compute-0 dazzling_colden[266542]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:08:44 compute-0 dazzling_colden[266542]: --> All data devices are unavailable
Oct 08 10:08:44 compute-0 systemd[1]: libpod-cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4.scope: Deactivated successfully.
Oct 08 10:08:44 compute-0 podman[266525]: 2025-10-08 10:08:44.73499591 +0000 UTC m=+0.518545585 container died cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d-merged.mount: Deactivated successfully.
Oct 08 10:08:44 compute-0 podman[266525]: 2025-10-08 10:08:44.80339612 +0000 UTC m=+0.586945765 container remove cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:08:44 compute-0 systemd[1]: libpod-conmon-cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4.scope: Deactivated successfully.
Oct 08 10:08:44 compute-0 sudo[266419]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:44.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:44 compute-0 sudo[266567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:08:44 compute-0 sudo[266567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:44 compute-0 sudo[266567]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:45 compute-0 sudo[266592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:08:45 compute-0 sudo[266592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:45 compute-0 podman[266659]: 2025-10-08 10:08:45.428661952 +0000 UTC m=+0.039580300 container create ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:08:45 compute-0 systemd[1]: Started libpod-conmon-ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439.scope.
Oct 08 10:08:45 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:08:45 compute-0 podman[266659]: 2025-10-08 10:08:45.412650845 +0000 UTC m=+0.023569223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:08:45 compute-0 podman[266659]: 2025-10-08 10:08:45.516893472 +0000 UTC m=+0.127811840 container init ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 10:08:45 compute-0 podman[266659]: 2025-10-08 10:08:45.52394392 +0000 UTC m=+0.134862268 container start ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:08:45 compute-0 systemd[1]: libpod-ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439.scope: Deactivated successfully.
Oct 08 10:08:45 compute-0 podman[266659]: 2025-10-08 10:08:45.529958345 +0000 UTC m=+0.140876693 container attach ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Oct 08 10:08:45 compute-0 determined_kowalevski[266676]: 167 167
Oct 08 10:08:45 compute-0 conmon[266676]: conmon ff81dbe37c26dcd359c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439.scope/container/memory.events
Oct 08 10:08:45 compute-0 podman[266659]: 2025-10-08 10:08:45.531830855 +0000 UTC m=+0.142749203 container died ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 10:08:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-afe2181951b5189c6947a50381f8d73ab785587c791c6c934d1a6be3191d7b9c-merged.mount: Deactivated successfully.
Oct 08 10:08:45 compute-0 podman[266659]: 2025-10-08 10:08:45.587336918 +0000 UTC m=+0.198255266 container remove ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 08 10:08:45 compute-0 systemd[1]: libpod-conmon-ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439.scope: Deactivated successfully.
Oct 08 10:08:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:08:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct 08 10:08:45 compute-0 podman[266698]: 2025-10-08 10:08:45.793117907 +0000 UTC m=+0.060420363 container create 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 08 10:08:45 compute-0 systemd[1]: Started libpod-conmon-2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e.scope.
Oct 08 10:08:45 compute-0 podman[266698]: 2025-10-08 10:08:45.766614111 +0000 UTC m=+0.033916627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:08:45 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:45 compute-0 podman[266698]: 2025-10-08 10:08:45.88669407 +0000 UTC m=+0.153996576 container init 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:08:45 compute-0 podman[266698]: 2025-10-08 10:08:45.893508671 +0000 UTC m=+0.160811137 container start 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 08 10:08:45 compute-0 podman[266698]: 2025-10-08 10:08:45.900223387 +0000 UTC m=+0.167525853 container attach 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 10:08:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]: {
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:     "1": [
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:         {
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "devices": [
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "/dev/loop3"
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             ],
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "lv_name": "ceph_lv0",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "lv_size": "21470642176",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "name": "ceph_lv0",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "tags": {
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.cluster_name": "ceph",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.crush_device_class": "",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.encrypted": "0",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.osd_id": "1",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.type": "block",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.vdo": "0",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:                 "ceph.with_tpm": "0"
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             },
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "type": "block",
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:             "vg_name": "ceph_vg0"
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:         }
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]:     ]
Oct 08 10:08:46 compute-0 unruffled_davinci[266715]: }
Oct 08 10:08:46 compute-0 sudo[266725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:08:46 compute-0 sudo[266725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:46 compute-0 sudo[266725]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:46 compute-0 systemd[1]: libpod-2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e.scope: Deactivated successfully.
Oct 08 10:08:46 compute-0 podman[266698]: 2025-10-08 10:08:46.196741138 +0000 UTC m=+0.464043604 container died 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 10:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3-merged.mount: Deactivated successfully.
Oct 08 10:08:46 compute-0 podman[266698]: 2025-10-08 10:08:46.260290661 +0000 UTC m=+0.527593137 container remove 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 10:08:46 compute-0 systemd[1]: libpod-conmon-2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e.scope: Deactivated successfully.
Oct 08 10:08:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:46 compute-0 sudo[266592]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:46.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:46 compute-0 sudo[266764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:08:46 compute-0 sudo[266764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:46 compute-0 sudo[266764]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:46 compute-0 sudo[266789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:08:46 compute-0 sudo[266789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:46 compute-0 podman[266855]: 2025-10-08 10:08:46.839332069 +0000 UTC m=+0.071553452 container create 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 08 10:08:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:46 compute-0 podman[266855]: 2025-10-08 10:08:46.790103249 +0000 UTC m=+0.022324662 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:08:46 compute-0 systemd[1]: Started libpod-conmon-56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85.scope.
Oct 08 10:08:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:46.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:46 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:08:46 compute-0 podman[266855]: 2025-10-08 10:08:46.955522604 +0000 UTC m=+0.187744007 container init 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 10:08:46 compute-0 podman[266855]: 2025-10-08 10:08:46.961884368 +0000 UTC m=+0.194105772 container start 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Oct 08 10:08:46 compute-0 zen_galois[266871]: 167 167
Oct 08 10:08:46 compute-0 systemd[1]: libpod-56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85.scope: Deactivated successfully.
Oct 08 10:08:46 compute-0 conmon[266871]: conmon 56f2bcf0900a2d937a08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85.scope/container/memory.events
Oct 08 10:08:46 compute-0 podman[266855]: 2025-10-08 10:08:46.969292268 +0000 UTC m=+0.201513671 container attach 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 10:08:46 compute-0 podman[266855]: 2025-10-08 10:08:46.969653779 +0000 UTC m=+0.201875192 container died 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:08:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cff83e4e7b7111cd13993edb31bbbeb04e8ebbd8e70d314c0c74e2f0c2c55791-merged.mount: Deactivated successfully.
Oct 08 10:08:47 compute-0 podman[266855]: 2025-10-08 10:08:47.022310511 +0000 UTC m=+0.254531884 container remove 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 10:08:47 compute-0 systemd[1]: libpod-conmon-56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85.scope: Deactivated successfully.
Oct 08 10:08:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:47.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:08:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:08:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:47.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:08:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=404 latency=0.003000098s ======
Oct 08 10:08:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:47.123 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.003000098s
Oct 08 10:08:47 compute-0 ceph-mon[73572]: pgmap v717: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:08:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - - [08/Oct/2025:10:08:47.139 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000032s
Oct 08 10:08:47 compute-0 podman[266897]: 2025-10-08 10:08:47.193264575 +0000 UTC m=+0.044155778 container create 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:08:47 compute-0 systemd[1]: Started libpod-conmon-8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5.scope.
Oct 08 10:08:47 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:08:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:08:47 compute-0 podman[266897]: 2025-10-08 10:08:47.170371285 +0000 UTC m=+0.021262498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:08:47 compute-0 podman[266897]: 2025-10-08 10:08:47.270434448 +0000 UTC m=+0.121325681 container init 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:08:47 compute-0 podman[266897]: 2025-10-08 10:08:47.277491506 +0000 UTC m=+0.128382709 container start 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 10:08:47 compute-0 podman[266897]: 2025-10-08 10:08:47.31383084 +0000 UTC m=+0.164722083 container attach 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:08:47
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', '.nfs', 'volumes', 'default.rgw.meta']
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:08:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:08:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:08:47 compute-0 lvm[266988]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:08:47 compute-0 lvm[266988]: VG ceph_vg0 finished
Oct 08 10:08:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:08:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:48 compute-0 hungry_goodall[266914]: {}
Oct 08 10:08:48 compute-0 systemd[1]: libpod-8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5.scope: Deactivated successfully.
Oct 08 10:08:48 compute-0 systemd[1]: libpod-8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5.scope: Consumed 1.152s CPU time.
Oct 08 10:08:48 compute-0 podman[266897]: 2025-10-08 10:08:48.088654384 +0000 UTC m=+0.939545607 container died 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 08 10:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f-merged.mount: Deactivated successfully.
Oct 08 10:08:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:08:48 compute-0 podman[266897]: 2025-10-08 10:08:48.164578837 +0000 UTC m=+1.015470040 container remove 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 10:08:48 compute-0 systemd[1]: libpod-conmon-8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5.scope: Deactivated successfully.
Oct 08 10:08:48 compute-0 sudo[266789]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:08:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:08:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:08:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:08:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:48 compute-0 sudo[267004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:08:48 compute-0 sudo[267004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:08:48 compute-0 sudo[267004]: pam_unix(sudo:session): session closed for user root
Oct 08 10:08:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:48.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:48.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:49 compute-0 ceph-mon[73572]: pgmap v718: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:08:49 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:49 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:08:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:08:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:50.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 08 10:08:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 08 10:08:50 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 08 10:08:50 compute-0 ceph-mon[73572]: pgmap v719: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:08:50 compute-0 ceph-mon[73572]: osdmap e149: 3 total, 3 up, 3 in
Oct 08 10:08:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:50 compute-0 podman[267031]: 2025-10-08 10:08:50.905869425 +0000 UTC m=+0.058905393 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 08 10:08:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:50.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 08 10:08:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 08 10:08:51 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 08 10:08:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct 08 10:08:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:08:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:52.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:08:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 08 10:08:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 08 10:08:52 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 08 10:08:52 compute-0 ceph-mon[73572]: osdmap e150: 3 total, 3 up, 3 in
Oct 08 10:08:52 compute-0 ceph-mon[73572]: pgmap v722: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct 08 10:08:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:52.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:53 compute-0 ceph-mon[73572]: osdmap e151: 3 total, 3 up, 3 in
Oct 08 10:08:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Oct 08 10:08:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:54.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:54 compute-0 ceph-mon[73572]: pgmap v724: 353 pgs: 353 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Oct 08 10:08:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:54.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:55] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:08:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:55] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct 08 10:08:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 08 10:08:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 08 10:08:55 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 08 10:08:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.7 MiB/s wr, 34 op/s
Oct 08 10:08:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:56.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:08:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:56.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:08:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:57.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:08:57 compute-0 ceph-mon[73572]: osdmap e152: 3 total, 3 up, 3 in
Oct 08 10:08:57 compute-0 ceph-mon[73572]: pgmap v726: 353 pgs: 353 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.7 MiB/s wr, 34 op/s
Oct 08 10:08:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:08:57.406 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:08:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:08:57.407 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:08:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:08:57.407 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:08:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.1 MiB/s wr, 28 op/s
Oct 08 10:08:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:08:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:58.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:08:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:08:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:08:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:08:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:58.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:08:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:08:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 08 10:08:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 08 10:08:59 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 08 10:08:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100859 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:08:59 compute-0 ceph-mon[73572]: pgmap v727: 353 pgs: 353 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.1 MiB/s wr, 28 op/s
Oct 08 10:08:59 compute-0 ceph-mon[73572]: osdmap e153: 3 total, 3 up, 3 in
Oct 08 10:08:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.5 MiB/s wr, 51 op/s
Oct 08 10:09:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:09:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:00.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:09:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:00.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:01 compute-0 ceph-mon[73572]: pgmap v729: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.5 MiB/s wr, 51 op/s
Oct 08 10:09:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Oct 08 10:09:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:09:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:02.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:09:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:09:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:09:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:02 compute-0 podman[267065]: 2025-10-08 10:09:02.914843458 +0000 UTC m=+0.079572352 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 10:09:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:02.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:03 compute-0 ceph-mon[73572]: pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Oct 08 10:09:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:09:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.5 MiB/s wr, 23 op/s
Oct 08 10:09:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:04.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:09:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:04.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:09:05 compute-0 ceph-mon[73572]: pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.5 MiB/s wr, 23 op/s
Oct 08 10:09:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:05] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct 08 10:09:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:05] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct 08 10:09:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Oct 08 10:09:06 compute-0 sudo[267098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:09:06 compute-0 sudo[267098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:06 compute-0 sudo[267098]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:06.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:06 compute-0 ceph-mon[73572]: pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Oct 08 10:09:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780012c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:09:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:06.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:09:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:07.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:09:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Oct 08 10:09:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:09:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:08.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002180 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:09:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:08.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:09:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:09 compute-0 ceph-mon[73572]: pgmap v733: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Oct 08 10:09:09 compute-0 podman[267127]: 2025-10-08 10:09:09.885888006 +0000 UTC m=+0.046734540 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 08 10:09:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 652 B/s wr, 2 op/s
Oct 08 10:09:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:10.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:10 compute-0 podman[267148]: 2025-10-08 10:09:10.8897359 +0000 UTC m=+0.056409504 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd)
Oct 08 10:09:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:10.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:11 compute-0 ceph-mon[73572]: pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 652 B/s wr, 2 op/s
Oct 08 10:09:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:09:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:09:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:09:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002180 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:09:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:09:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:12.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:09:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002180 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:12.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:13 compute-0 ceph-mon[73572]: pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 08 10:09:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:09:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:09:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:14.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:14 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:14.447 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:09:14 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:14.448 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:09:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:14.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:15 compute-0 ceph-mon[73572]: pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:09:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:15] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct 08 10:09:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:15] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct 08 10:09:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002180 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:09:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:16.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:16 compute-0 ceph-mon[73572]: pgmap v737: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:09:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:16.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:17.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:09:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:09:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:09:17 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:09:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:09:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:09:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:09:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:09:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:09:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:09:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:09:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:18.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:18 compute-0 ceph-mon[73572]: pgmap v738: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 08 10:09:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:18.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 08 10:09:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:09:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:20.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:09:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:20.449 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:09:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:20.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:21 compute-0 ceph-mon[73572]: pgmap v739: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 08 10:09:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3511894652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:09:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3511894652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:09:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100921 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:09:21 compute-0 podman[267181]: 2025-10-08 10:09:21.913501619 +0000 UTC m=+0.061850119 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:09:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 10:09:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:22.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:22.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:23 compute-0 ceph-mon[73572]: pgmap v740: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 10:09:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 10:09:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:09:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:24.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:09:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:24.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:25 compute-0 ceph-mon[73572]: pgmap v741: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 08 10:09:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:25] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct 08 10:09:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:25] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct 08 10:09:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:09:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:26 compute-0 sudo[267208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:09:26 compute-0 sudo[267208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:26 compute-0 sudo[267208]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:26.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:26.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:27.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:09:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:27.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:09:27 compute-0 ceph-mon[73572]: pgmap v742: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:09:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:09:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:28.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:09:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:28.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:09:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.062238) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169062320, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1237, "num_deletes": 252, "total_data_size": 2186030, "memory_usage": 2218920, "flush_reason": "Manual Compaction"}
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169071741, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1384368, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22125, "largest_seqno": 23361, "table_properties": {"data_size": 1379522, "index_size": 2242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12185, "raw_average_key_size": 20, "raw_value_size": 1369109, "raw_average_value_size": 2320, "num_data_blocks": 97, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918061, "oldest_key_time": 1759918061, "file_creation_time": 1759918169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 9525 microseconds, and 4327 cpu microseconds.
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.071799) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1384368 bytes OK
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.071820) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.073391) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.073407) EVENT_LOG_v1 {"time_micros": 1759918169073402, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.073426) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2180563, prev total WAL file size 2180563, number of live WAL files 2.
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.074185) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1351KB)], [47(14MB)]
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169074255, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16144972, "oldest_snapshot_seqno": -1}
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5531 keys, 12783059 bytes, temperature: kUnknown
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169140184, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12783059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12746983, "index_size": 21118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 139386, "raw_average_key_size": 25, "raw_value_size": 12647900, "raw_average_value_size": 2286, "num_data_blocks": 862, "num_entries": 5531, "num_filter_entries": 5531, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.140463) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12783059 bytes
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.141527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.5 rd, 193.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.1 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(20.9) write-amplify(9.2) OK, records in: 6009, records dropped: 478 output_compression: NoCompression
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.141545) EVENT_LOG_v1 {"time_micros": 1759918169141537, "job": 24, "event": "compaction_finished", "compaction_time_micros": 66022, "compaction_time_cpu_micros": 29801, "output_level": 6, "num_output_files": 1, "total_output_size": 12783059, "num_input_records": 6009, "num_output_records": 5531, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169141950, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169144480, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.074108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:29 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:29 compute-0 ceph-mon[73572]: pgmap v743: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:09:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:09:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:09:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:30.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:09:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:30.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:31 compute-0 ceph-mon[73572]: pgmap v744: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 08 10:09:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:09:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:09:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:32.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:09:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:09:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:09:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:32.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.138 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.138 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.161 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.255 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.256 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.264 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.265 2 INFO nova.compute.claims [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Claim successful on node compute-0.ctlplane.example.com
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.389 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:33 compute-0 ceph-mon[73572]: pgmap v745: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:09:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:09:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:09:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750286088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.817 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.822 2 DEBUG nova.compute.provider_tree [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.836 2 DEBUG nova.scheduler.client.report [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.863 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.864 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.910 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.910 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 08 10:09:33 compute-0 podman[267262]: 2025-10-08 10:09:33.922638005 +0000 UTC m=+0.085491943 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:09:33 compute-0 nova_compute[262220]: 2025-10-08 10:09:33.937 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 10:09:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.027 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 08 10:09:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.115 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.118 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.118 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Creating image(s)
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.162 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.205 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.251 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.254 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "3cde70359534d4758cf71011630bd1fb14a90c92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.255 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:34.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/750286088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 08 10:09:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.914 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.915 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.915 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 08 10:09:34 compute-0 nova_compute[262220]: 2025-10-08 10:09:34.930 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:35.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:35 compute-0 nova_compute[262220]: 2025-10-08 10:09:35.009 2 WARNING oslo_policy.policy [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 08 10:09:35 compute-0 nova_compute[262220]: 2025-10-08 10:09:35.010 2 WARNING oslo_policy.policy [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 08 10:09:35 compute-0 nova_compute[262220]: 2025-10-08 10:09:35.012 2 DEBUG nova.policy [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 08 10:09:35 compute-0 nova_compute[262220]: 2025-10-08 10:09:35.229 2 DEBUG nova.virt.libvirt.imagebackend [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image locations are: [{'url': 'rbd://787292cc-8154-50c4-9e00-e9be3e817149/images/e5994bac-385d-4cfe-962e-386aa0559983/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://787292cc-8154-50c4-9e00-e9be3e817149/images/e5994bac-385d-4cfe-962e-386aa0559983/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 08 10:09:35 compute-0 ceph-mon[73572]: pgmap v746: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:09:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:35] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct 08 10:09:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:35] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct 08 10:09:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.165 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.220 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.part --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.221 2 DEBUG nova.virt.images [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] e5994bac-385d-4cfe-962e-386aa0559983 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.222 2 DEBUG nova.privsep.utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.222 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.part /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.246 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Successfully created port: d6bc221b-bf28-4c61-b116-cd61209c7f31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 08 10:09:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:09:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:36.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.402 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.part /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.converted" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.406 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:36 compute-0 ceph-mon[73572]: pgmap v747: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.457 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.458 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.481 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:09:36 compute-0 nova_compute[262220]: 2025-10-08 10:09:36.484 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 f49b788e-70d1-4bc2-9f90-381017f2b232_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:37.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:37.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:09:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 08 10:09:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 08 10:09:37 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 08 10:09:37 compute-0 nova_compute[262220]: 2025-10-08 10:09:37.941 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:37 compute-0 nova_compute[262220]: 2025-10-08 10:09:37.941 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.137 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Successfully updated port: d6bc221b-bf28-4c61-b116-cd61209c7f31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.154 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.154 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.154 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.327 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 08 10:09:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:38.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 08 10:09:38 compute-0 ceph-mon[73572]: osdmap e154: 3 total, 3 up, 3 in
Oct 08 10:09:38 compute-0 ceph-mon[73572]: pgmap v749: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Oct 08 10:09:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 08 10:09:38 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.647 2 DEBUG nova.compute.manager [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.648 2 DEBUG nova.compute.manager [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing instance network info cache due to event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.648 2 DEBUG oslo_concurrency.lockutils [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.764 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 f49b788e-70d1-4bc2-9f90-381017f2b232_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.842 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] resizing rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.911 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.912 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.912 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.913 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.913 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.973 2 DEBUG nova.objects.instance [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'migration_context' on Instance uuid f49b788e-70d1-4bc2-9f90-381017f2b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.988 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.989 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Ensure instance console log exists: /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.989 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.990 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:38 compute-0 nova_compute[262220]: 2025-10-08 10:09:38.990 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:39.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:09:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142038770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.357 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.362 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.380 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.380 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance network_info: |[{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.381 2 DEBUG oslo_concurrency.lockutils [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.381 2 DEBUG nova.network.neutron [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.383 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start _get_guest_xml network_info=[{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': 'e5994bac-385d-4cfe-962e-386aa0559983'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.391 2 WARNING nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.397 2 DEBUG nova.virt.libvirt.host [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.398 2 DEBUG nova.virt.libvirt.host [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.404 2 DEBUG nova.virt.libvirt.host [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.405 2 DEBUG nova.virt.libvirt.host [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.406 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.406 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T10:08:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='461f98d6-ae65-4f86-8ae2-cc3cfaea2a46',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.407 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.407 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.407 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.407 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.409 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.412 2 DEBUG nova.privsep.utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.412 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:39 compute-0 ceph-mon[73572]: osdmap e155: 3 total, 3 up, 3 in
Oct 08 10:09:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4142038770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.590 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.592 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4851MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.593 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.593 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.688 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance f49b788e-70d1-4bc2-9f90-381017f2b232 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.689 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.689 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.737 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.823 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.824 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.839 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.858 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 08 10:09:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 08 10:09:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611300459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.889 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.915 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.918 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:39 compute-0 nova_compute[262220]: 2025-10-08 10:09:39.933 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 74 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 40 op/s
Oct 08 10:09:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 08 10:09:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771362139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.368 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.370 2 DEBUG nova.virt.libvirt.vif [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1358472667',display_name='tempest-TestNetworkBasicOps-server-1358472667',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1358472667',id=1,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGCqOiRkCvMZRP8fdEWleadJa9k0DhfKx++pZ4blF3y05LQ1KZbyE4MTPNAMp9BRrBdK92MH6DC+pII7aGjodGwK7AspsjQ0hDDswc17pIZ089tmxUxos+hWl7sAULow5Q==',key_name='tempest-TestNetworkBasicOps-1893605271',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-50tfjz8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:09:34Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=f49b788e-70d1-4bc2-9f90-381017f2b232,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.370 2 DEBUG nova.network.os_vif_util [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.371 2 DEBUG nova.network.os_vif_util [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.373 2 DEBUG nova.objects.instance [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_devices' on Instance uuid f49b788e-70d1-4bc2-9f90-381017f2b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:09:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:40.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.387 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] End _get_guest_xml xml=<domain type="kvm">
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <uuid>f49b788e-70d1-4bc2-9f90-381017f2b232</uuid>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <name>instance-00000001</name>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <memory>131072</memory>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <vcpu>1</vcpu>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <metadata>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <nova:name>tempest-TestNetworkBasicOps-server-1358472667</nova:name>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <nova:creationTime>2025-10-08 10:09:39</nova:creationTime>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <nova:flavor name="m1.nano">
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <nova:memory>128</nova:memory>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <nova:disk>1</nova:disk>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <nova:swap>0</nova:swap>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <nova:vcpus>1</nova:vcpus>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       </nova:flavor>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <nova:owner>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       </nova:owner>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <nova:ports>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <nova:port uuid="d6bc221b-bf28-4c61-b116-cd61209c7f31">
Oct 08 10:09:40 compute-0 nova_compute[262220]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         </nova:port>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       </nova:ports>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     </nova:instance>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   </metadata>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <sysinfo type="smbios">
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <system>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <entry name="manufacturer">RDO</entry>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <entry name="product">OpenStack Compute</entry>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <entry name="serial">f49b788e-70d1-4bc2-9f90-381017f2b232</entry>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <entry name="uuid">f49b788e-70d1-4bc2-9f90-381017f2b232</entry>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <entry name="family">Virtual Machine</entry>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     </system>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   </sysinfo>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <os>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <boot dev="hd"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <smbios mode="sysinfo"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   </os>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <features>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <acpi/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <apic/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <vmcoreinfo/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   </features>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <clock offset="utc">
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <timer name="hpet" present="no"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   </clock>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <cpu mode="host-model" match="exact">
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <disk type="network" device="disk">
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <driver type="raw" cache="none"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <source protocol="rbd" name="vms/f49b788e-70d1-4bc2-9f90-381017f2b232_disk">
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <host name="192.168.122.100" port="6789"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <host name="192.168.122.102" port="6789"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <host name="192.168.122.101" port="6789"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       </source>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <auth username="openstack">
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <target dev="vda" bus="virtio"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <disk type="network" device="cdrom">
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <driver type="raw" cache="none"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <source protocol="rbd" name="vms/f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config">
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <host name="192.168.122.100" port="6789"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <host name="192.168.122.102" port="6789"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <host name="192.168.122.101" port="6789"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       </source>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <auth username="openstack">
Oct 08 10:09:40 compute-0 nova_compute[262220]:         <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <target dev="sda" bus="sata"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <interface type="ethernet">
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <mac address="fa:16:3e:9d:d1:5c"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <model type="virtio"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <mtu size="1442"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <target dev="tapd6bc221b-bf"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <serial type="pty">
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <log file="/var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/console.log" append="off"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     </serial>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <video>
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <model type="virtio"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     </video>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <input type="tablet" bus="usb"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <rng model="virtio">
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <backend model="random">/dev/urandom</backend>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <controller type="usb" index="0"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     <memballoon model="virtio">
Oct 08 10:09:40 compute-0 nova_compute[262220]:       <stats period="10"/>
Oct 08 10:09:40 compute-0 nova_compute[262220]:     </memballoon>
Oct 08 10:09:40 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:09:40 compute-0 nova_compute[262220]: </domain>
Oct 08 10:09:40 compute-0 nova_compute[262220]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.388 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Preparing to wait for external event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.389 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.389 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.389 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.390 2 DEBUG nova.virt.libvirt.vif [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1358472667',display_name='tempest-TestNetworkBasicOps-server-1358472667',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1358472667',id=1,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGCqOiRkCvMZRP8fdEWleadJa9k0DhfKx++pZ4blF3y05LQ1KZbyE4MTPNAMp9BRrBdK92MH6DC+pII7aGjodGwK7AspsjQ0hDDswc17pIZ089tmxUxos+hWl7sAULow5Q==',key_name='tempest-TestNetworkBasicOps-1893605271',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-50tfjz8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:09:34Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=f49b788e-70d1-4bc2-9f90-381017f2b232,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.390 2 DEBUG nova.network.os_vif_util [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.390 2 DEBUG nova.network.os_vif_util [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.391 2 DEBUG os_vif [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 08 10:09:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:09:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857050921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.418 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.423 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.459 2 DEBUG ovsdbapp.backend.ovs_idl [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.459 2 DEBUG ovsdbapp.backend.ovs_idl [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.459 2 DEBUG ovsdbapp.backend.ovs_idl [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.474 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updated inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.474 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.474 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.480 2 INFO oslo.privsep.daemon [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmprzdv9b44/privsep.sock']
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.495 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.495 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2611300459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:09:40 compute-0 ceph-mon[73572]: pgmap v751: 353 pgs: 353 active+clean; 74 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 40 op/s
Oct 08 10:09:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/771362139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:09:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2857050921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.674 2 DEBUG nova.network.neutron [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updated VIF entry in instance network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.675 2 DEBUG nova.network.neutron [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:09:40 compute-0 nova_compute[262220]: 2025-10-08 10:09:40.690 2 DEBUG oslo_concurrency.lockutils [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:09:40 compute-0 podman[267584]: 2025-10-08 10:09:40.888703337 +0000 UTC m=+0.050065955 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 10:09:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:41.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.165 2 INFO oslo.privsep.daemon [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Spawned new privsep daemon via rootwrap
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.047 565 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.051 565 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.053 565 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.053 565 INFO oslo.privsep.daemon [-] privsep daemon running as pid 565
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.495 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.495 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.495 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6bc221b-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6bc221b-bf, col_values=(('external_ids', {'iface-id': 'd6bc221b-bf28-4c61-b116-cd61209c7f31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:d1:5c', 'vm-uuid': 'f49b788e-70d1-4bc2-9f90-381017f2b232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.534 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.535 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.535 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.536 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.536 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:41 compute-0 NetworkManager[44872]: <info>  [1759918181.5483] manager: (tapd6bc221b-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.555 2 INFO os_vif [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf')
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.616 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.617 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.617 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:9d:d1:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.618 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Using config drive
Oct 08 10:09:41 compute-0 nova_compute[262220]: 2025-10-08 10:09:41.640 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:09:41 compute-0 podman[267628]: 2025-10-08 10:09:41.91131932 +0000 UTC m=+0.073869297 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 08 10:09:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 74 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 40 op/s
Oct 08 10:09:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:42.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.402 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Creating config drive at /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.407 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xrqpvmg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.543 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xrqpvmg" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.573 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.576 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.721 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.722 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Deleting local config drive /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config because it was imported into RBD.
Oct 08 10:09:42 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:42 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 08 10:09:42 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 08 10:09:42 compute-0 NetworkManager[44872]: <info>  [1759918182.8152] manager: (tapd6bc221b-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 08 10:09:42 compute-0 kernel: tapd6bc221b-bf: entered promiscuous mode
Oct 08 10:09:42 compute-0 ovn_controller[153187]: 2025-10-08T10:09:42Z|00027|binding|INFO|Claiming lport d6bc221b-bf28-4c61-b116-cd61209c7f31 for this chassis.
Oct 08 10:09:42 compute-0 ovn_controller[153187]: 2025-10-08T10:09:42Z|00028|binding|INFO|d6bc221b-bf28-4c61-b116-cd61209c7f31: Claiming fa:16:3e:9d:d1:5c 10.100.0.6
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:42 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:42.836 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:d1:5c 10.100.0.6'], port_security=['fa:16:3e:9d:d1:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f49b788e-70d1-4bc2-9f90-381017f2b232', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5c6f88b-41ed-45ea-b491-931be9a75138', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b714465-ebb6-4c8b-ab03-a9d6fbedd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6475b99-4f25-4ccc-88e7-4eafaf6f3891, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=d6bc221b-bf28-4c61-b116-cd61209c7f31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:09:42 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:42.837 163175 INFO neutron.agent.ovn.metadata.agent [-] Port d6bc221b-bf28-4c61-b116-cd61209c7f31 in datapath f5c6f88b-41ed-45ea-b491-931be9a75138 bound to our chassis
Oct 08 10:09:42 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:42.838 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5c6f88b-41ed-45ea-b491-931be9a75138
Oct 08 10:09:42 compute-0 systemd-udevd[267720]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:09:42 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:42.840 163175 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmphu37rar1/privsep.sock']
Oct 08 10:09:42 compute-0 NetworkManager[44872]: <info>  [1759918182.8581] device (tapd6bc221b-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 10:09:42 compute-0 NetworkManager[44872]: <info>  [1759918182.8590] device (tapd6bc221b-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 10:09:42 compute-0 systemd-machined[216030]: New machine qemu-1-instance-00000001.
Oct 08 10:09:42 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 08 10:09:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:42 compute-0 ovn_controller[153187]: 2025-10-08T10:09:42Z|00029|binding|INFO|Setting lport d6bc221b-bf28-4c61-b116-cd61209c7f31 ovn-installed in OVS
Oct 08 10:09:42 compute-0 ovn_controller[153187]: 2025-10-08T10:09:42Z|00030|binding|INFO|Setting lport d6bc221b-bf28-4c61-b116-cd61209c7f31 up in Southbound
Oct 08 10:09:42 compute-0 nova_compute[262220]: 2025-10-08 10:09:42.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:43.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:43 compute-0 ceph-mon[73572]: pgmap v752: 353 pgs: 353 active+clean; 74 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 40 op/s
Oct 08 10:09:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3459562863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/268655304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:43 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.562 163175 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 08 10:09:43 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.562 163175 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmphu37rar1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 08 10:09:43 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.421 267781 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 10:09:43 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.426 267781 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 10:09:43 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.428 267781 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 08 10:09:43 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.429 267781 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267781
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.565 2 DEBUG nova.compute.manager [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.565 2 DEBUG oslo_concurrency.lockutils [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.565 2 DEBUG oslo_concurrency.lockutils [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.565 2 DEBUG oslo_concurrency.lockutils [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.566 2 DEBUG nova.compute.manager [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Processing event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 08 10:09:43 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.566 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[27e84b03-ab95-46e7-94e6-cdde1d3fdc38]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.821 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918183.8208282, f49b788e-70d1-4bc2-9f90-381017f2b232 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.822 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] VM Started (Lifecycle Event)
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.826 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.839 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.842 2 INFO nova.virt.libvirt.driver [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance spawned successfully.
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.842 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.865 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.872 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.872 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.873 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.873 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.874 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.874 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.877 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.905 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.906 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918183.8219543, f49b788e-70d1-4bc2-9f90-381017f2b232 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.906 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] VM Paused (Lifecycle Event)
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.933 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.936 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918183.8379052, f49b788e-70d1-4bc2-9f90-381017f2b232 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.937 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] VM Resumed (Lifecycle Event)
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.942 2 INFO nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Took 9.83 seconds to spawn the instance on the hypervisor.
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.943 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.955 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:09:43 compute-0 nova_compute[262220]: 2025-10-08 10:09:43.960 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 08 10:09:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:44 compute-0 nova_compute[262220]: 2025-10-08 10:09:44.024 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 08 10:09:44 compute-0 nova_compute[262220]: 2025-10-08 10:09:44.044 2 INFO nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Took 10.82 seconds to build instance.
Oct 08 10:09:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 08 10:09:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 08 10:09:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 80 op/s
Oct 08 10:09:44 compute-0 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 08 10:09:44 compute-0 nova_compute[262220]: 2025-10-08 10:09:44.113 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3069375628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3500555269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:09:44 compute-0 ceph-mon[73572]: osdmap e156: 3 total, 3 up, 3 in
Oct 08 10:09:44 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.263 267781 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:44 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.263 267781 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:44 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.263 267781 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:44.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:44 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.993 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[55e29e44-43e9-4307-9609-e1b444c9bdc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:44 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.995 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5c6f88b-41 in ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 08 10:09:44 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.997 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5c6f88b-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 08 10:09:44 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.997 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[7702c2da-1e52-48ee-a58c-d1e43bd2f872]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.000 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[23cc25fd-696f-4649-b088-63f0487f02c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:09:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:45.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.022 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf58830-d389-4c57-b280-b1e8f94041d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.051 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[2165f542-8262-4264-a665-3410dc043bca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.053 163175 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpwyl_6z3j/privsep.sock']
Oct 08 10:09:45 compute-0 ceph-mon[73572]: pgmap v754: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 80 op/s
Oct 08 10:09:45 compute-0 nova_compute[262220]: 2025-10-08 10:09:45.655 2 DEBUG nova.compute.manager [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:09:45 compute-0 nova_compute[262220]: 2025-10-08 10:09:45.655 2 DEBUG oslo_concurrency.lockutils [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:45 compute-0 nova_compute[262220]: 2025-10-08 10:09:45.655 2 DEBUG oslo_concurrency.lockutils [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:45 compute-0 nova_compute[262220]: 2025-10-08 10:09:45.656 2 DEBUG oslo_concurrency.lockutils [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:45 compute-0 nova_compute[262220]: 2025-10-08 10:09:45.656 2 DEBUG nova.compute.manager [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] No waiting events found dispatching network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:09:45 compute-0 nova_compute[262220]: 2025-10-08 10:09:45.656 2 WARNING nova.compute.manager [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received unexpected event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 for instance with vm_state active and task_state None.
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.718 163175 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.719 163175 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwyl_6z3j/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.578 267799 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.582 267799 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.584 267799 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.585 267799 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267799
Oct 08 10:09:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.721 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce0c1be-5fd7-4410-b3c5-8242269f54f0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:45] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct 08 10:09:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:45] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct 08 10:09:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 65 op/s
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.231 267799 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.231 267799 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.231 267799 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:46.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:46 compute-0 sudo[267805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:09:46 compute-0 sudo[267805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:46 compute-0 sudo[267805]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:46 compute-0 nova_compute[262220]: 2025-10-08 10:09:46.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.5625] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.5630] device (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.5637] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.5639] device (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 10:09:46 compute-0 nova_compute[262220]: 2025-10-08 10:09:46.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.5645] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.5649] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.5652] device (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.5654] device (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 10:09:46 compute-0 nova_compute[262220]: 2025-10-08 10:09:46.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:46 compute-0 nova_compute[262220]: 2025-10-08 10:09:46.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:46 compute-0 nova_compute[262220]: 2025-10-08 10:09:46.805 2 DEBUG nova.compute.manager [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:09:46 compute-0 nova_compute[262220]: 2025-10-08 10:09:46.805 2 DEBUG nova.compute.manager [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing instance network info cache due to event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:09:46 compute-0 nova_compute[262220]: 2025-10-08 10:09:46.806 2 DEBUG oslo_concurrency.lockutils [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:09:46 compute-0 nova_compute[262220]: 2025-10-08 10:09:46.806 2 DEBUG oslo_concurrency.lockutils [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:09:46 compute-0 nova_compute[262220]: 2025-10-08 10:09:46.806 2 DEBUG nova.network.neutron [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.851 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[ac37b49d-2dc3-4f27-94df-f62f0b8236ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.8627] manager: (tapf5c6f88b-40): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.863 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e438d7-548e-4aa1-985e-0bb5c3813b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:46 compute-0 systemd-udevd[267836]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.891 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[8459f282-79bb-476e-b7fe-62df0069282b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.894 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[6da6cfdf-2a04-47d0-b940-ac0d339c73ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:46 compute-0 NetworkManager[44872]: <info>  [1759918186.9187] device (tapf5c6f88b-40): carrier: link connected
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.922 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[392ae52a-886a-416e-9bda-1c40a67cdc66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.940 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[17135495-2ad5-49d7-a551-546314e2dbaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5c6f88b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:9c:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414555, 'reachable_time': 36725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267855, 'error': None, 'target': 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.956 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[8e6bf5c0-5fb8-4689-827c-cb50075cd81b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:9cfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414555, 'tstamp': 414555}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267856, 'error': None, 'target': 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.971 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[c17937ff-cbbd-44bb-ba01-d271f9567e2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5c6f88b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:9c:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414555, 'reachable_time': 36725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267857, 'error': None, 'target': 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:46 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.998 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e3de11-12a2-424f-a8f4-a20e1112c990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:47.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.050 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee9428c-41f7-4552-90db-eb4f703afadf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.052 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5c6f88b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.052 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.053 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5c6f88b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:09:47 compute-0 kernel: tapf5c6f88b-40: entered promiscuous mode
Oct 08 10:09:47 compute-0 NetworkManager[44872]: <info>  [1759918187.0554] manager: (tapf5c6f88b-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 08 10:09:47 compute-0 nova_compute[262220]: 2025-10-08 10:09:47.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.059 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5c6f88b-40, col_values=(('external_ids', {'iface-id': '950da3ad-35fb-4b98-a8cb-0ee192607b20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:09:47 compute-0 ovn_controller[153187]: 2025-10-08T10:09:47Z|00031|binding|INFO|Releasing lport 950da3ad-35fb-4b98-a8cb-0ee192607b20 from this chassis (sb_readonly=0)
Oct 08 10:09:47 compute-0 nova_compute[262220]: 2025-10-08 10:09:47.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:47 compute-0 nova_compute[262220]: 2025-10-08 10:09:47.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.074 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5c6f88b-41ed-45ea-b491-931be9a75138.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5c6f88b-41ed-45ea-b491-931be9a75138.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.075 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fc79314d-7fd6-4149-9681-fabdd4d1f994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.076 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: global
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     log         /dev/log local0 debug
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     log-tag     haproxy-metadata-proxy-f5c6f88b-41ed-45ea-b491-931be9a75138
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     user        root
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     group       root
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     maxconn     1024
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     pidfile     /var/lib/neutron/external/pids/f5c6f88b-41ed-45ea-b491-931be9a75138.pid.haproxy
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     daemon
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: defaults
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     log global
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     mode http
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     option httplog
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     option dontlognull
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     option http-server-close
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     option forwardfor
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     retries                 3
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     timeout http-request    30s
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     timeout connect         30s
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     timeout client          32s
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     timeout server          32s
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     timeout http-keep-alive 30s
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: listen listener
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     bind 169.254.169.254:80
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:     http-request add-header X-OVN-Network-ID f5c6f88b-41ed-45ea-b491-931be9a75138
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 08 10:09:47 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.078 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'env', 'PROCESS_TAG=haproxy-f5c6f88b-41ed-45ea-b491-931be9a75138', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5c6f88b-41ed-45ea-b491-931be9a75138.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 08 10:09:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:47.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:09:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:47.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:09:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:47.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:09:47 compute-0 ceph-mon[73572]: pgmap v755: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 65 op/s
Oct 08 10:09:47 compute-0 podman[267890]: 2025-10-08 10:09:47.446704938 +0000 UTC m=+0.052552377 container create 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 08 10:09:47 compute-0 systemd[1]: Started libpod-conmon-0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927.scope.
Oct 08 10:09:47 compute-0 podman[267890]: 2025-10-08 10:09:47.420678764 +0000 UTC m=+0.026526233 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 08 10:09:47 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e3d1aa5320eb20e28cf9285cbf8434fde889ae25e1684b2e2a512764f7589a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:47 compute-0 podman[267890]: 2025-10-08 10:09:47.55147187 +0000 UTC m=+0.157319339 container init 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 08 10:09:47 compute-0 podman[267890]: 2025-10-08 10:09:47.557134066 +0000 UTC m=+0.162981505 container start 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 10:09:47 compute-0 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [NOTICE]   (267909) : New worker (267911) forked
Oct 08 10:09:47 compute-0 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [NOTICE]   (267909) : Loading success.
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:09:47
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['.nfs', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'backups']
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:09:47 compute-0 nova_compute[262220]: 2025-10-08 10:09:47.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:09:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:09:47 compute-0 nova_compute[262220]: 2025-10-08 10:09:47.897 2 DEBUG nova.network.neutron [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updated VIF entry in instance network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:09:47 compute-0 nova_compute[262220]: 2025-10-08 10:09:47.897 2 DEBUG nova.network.neutron [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:09:47 compute-0 nova_compute[262220]: 2025-10-08 10:09:47.921 2 DEBUG oslo_concurrency.lockutils [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:09:47 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:09:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 55 op/s
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:09:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:09:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:09:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:48.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:48 compute-0 sudo[267921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:09:48 compute-0 sudo[267921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:48 compute-0 sudo[267921]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:48 compute-0 sudo[267946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:09:48 compute-0 sudo[267946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:49.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.081565) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189081617, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 473, "num_deletes": 258, "total_data_size": 422347, "memory_usage": 431576, "flush_reason": "Manual Compaction"}
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189105997, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 418228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23362, "largest_seqno": 23834, "table_properties": {"data_size": 415596, "index_size": 668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6267, "raw_average_key_size": 17, "raw_value_size": 410117, "raw_average_value_size": 1161, "num_data_blocks": 30, "num_entries": 353, "num_filter_entries": 353, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918169, "oldest_key_time": 1759918169, "file_creation_time": 1759918189, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 25019 microseconds, and 3482 cpu microseconds.
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.106586) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 418228 bytes OK
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.106604) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113082) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113100) EVENT_LOG_v1 {"time_micros": 1759918189113095, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113117) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 419510, prev total WAL file size 419510, number of live WAL files 2.
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113585) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353036' seq:0, type:0; will stop at (end)
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(408KB)], [50(12MB)]
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189113621, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13201287, "oldest_snapshot_seqno": -1}
Oct 08 10:09:49 compute-0 sudo[267946]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5356 keys, 13082972 bytes, temperature: kUnknown
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189280650, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13082972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13047165, "index_size": 21297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137022, "raw_average_key_size": 25, "raw_value_size": 12950191, "raw_average_value_size": 2417, "num_data_blocks": 866, "num_entries": 5356, "num_filter_entries": 5356, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918189, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:09:49 compute-0 ceph-mon[73572]: pgmap v756: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 55 op/s
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.280886) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13082972 bytes
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.299203) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 79.0 rd, 78.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 12.2 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(62.8) write-amplify(31.3) OK, records in: 5884, records dropped: 528 output_compression: NoCompression
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.299238) EVENT_LOG_v1 {"time_micros": 1759918189299225, "job": 26, "event": "compaction_finished", "compaction_time_micros": 167094, "compaction_time_cpu_micros": 25841, "output_level": 6, "num_output_files": 1, "total_output_size": 13082972, "num_input_records": 5884, "num_output_records": 5356, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189299469, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189301647, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:09:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:09:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:09:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:09:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:09:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:09:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:09:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:09:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:09:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:09:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:09:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:09:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:09:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:09:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:09:49 compute-0 sudo[268005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:09:49 compute-0 sudo[268005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:49 compute-0 sudo[268005]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:49 compute-0 sudo[268030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:09:49 compute-0 sudo[268030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:49 compute-0 podman[268096]: 2025-10-08 10:09:49.929312022 +0000 UTC m=+0.104923987 container create c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 08 10:09:49 compute-0 podman[268096]: 2025-10-08 10:09:49.846807712 +0000 UTC m=+0.022419697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:09:49 compute-0 systemd[1]: Started libpod-conmon-c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30.scope.
Oct 08 10:09:50 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:09:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:50 compute-0 podman[268096]: 2025-10-08 10:09:50.066801749 +0000 UTC m=+0.242413734 container init c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:09:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 941 KiB/s wr, 97 op/s
Oct 08 10:09:50 compute-0 podman[268096]: 2025-10-08 10:09:50.074815753 +0000 UTC m=+0.250427718 container start c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Oct 08 10:09:50 compute-0 gifted_curran[268113]: 167 167
Oct 08 10:09:50 compute-0 systemd[1]: libpod-c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30.scope: Deactivated successfully.
Oct 08 10:09:50 compute-0 podman[268096]: 2025-10-08 10:09:50.09970399 +0000 UTC m=+0.275315975 container attach c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 10:09:50 compute-0 podman[268096]: 2025-10-08 10:09:50.100778285 +0000 UTC m=+0.276390250 container died c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Oct 08 10:09:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bff23ae85cadcea67b9a8fd01ae06d0e9cd13e2ed00f635413abbf389a1a1b2-merged.mount: Deactivated successfully.
Oct 08 10:09:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:09:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:09:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:09:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:09:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:09:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:09:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:09:50 compute-0 podman[268096]: 2025-10-08 10:09:50.330318126 +0000 UTC m=+0.505930091 container remove c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 10:09:50 compute-0 systemd[1]: libpod-conmon-c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30.scope: Deactivated successfully.
Oct 08 10:09:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:50.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:50 compute-0 podman[268138]: 2025-10-08 10:09:50.5136892 +0000 UTC m=+0.039284792 container create a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:09:50 compute-0 systemd[1]: Started libpod-conmon-a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb.scope.
Oct 08 10:09:50 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:09:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:50 compute-0 podman[268138]: 2025-10-08 10:09:50.589755439 +0000 UTC m=+0.115351051 container init a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 10:09:50 compute-0 podman[268138]: 2025-10-08 10:09:50.496776434 +0000 UTC m=+0.022372046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:09:50 compute-0 podman[268138]: 2025-10-08 10:09:50.602104404 +0000 UTC m=+0.127699996 container start a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 08 10:09:50 compute-0 podman[268138]: 2025-10-08 10:09:50.60502258 +0000 UTC m=+0.130618172 container attach a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 08 10:09:50 compute-0 hungry_moser[268155]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:09:50 compute-0 hungry_moser[268155]: --> All data devices are unavailable
Oct 08 10:09:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:50 compute-0 systemd[1]: libpod-a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb.scope: Deactivated successfully.
Oct 08 10:09:50 compute-0 podman[268138]: 2025-10-08 10:09:50.930846024 +0000 UTC m=+0.456441616 container died a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:09:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a-merged.mount: Deactivated successfully.
Oct 08 10:09:50 compute-0 podman[268138]: 2025-10-08 10:09:50.970914309 +0000 UTC m=+0.496509901 container remove a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 10:09:50 compute-0 systemd[1]: libpod-conmon-a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb.scope: Deactivated successfully.
Oct 08 10:09:51 compute-0 sudo[268030]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:09:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:51.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:09:51 compute-0 sudo[268183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:09:51 compute-0 sudo[268183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:51 compute-0 sudo[268183]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:51 compute-0 sudo[268208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:09:51 compute-0 sudo[268208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:51 compute-0 ceph-mon[73572]: pgmap v757: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 941 KiB/s wr, 97 op/s
Oct 08 10:09:51 compute-0 nova_compute[262220]: 2025-10-08 10:09:51.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:51 compute-0 podman[268272]: 2025-10-08 10:09:51.614954146 +0000 UTC m=+0.044211473 container create 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 10:09:51 compute-0 systemd[1]: Started libpod-conmon-4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c.scope.
Oct 08 10:09:51 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:09:51 compute-0 podman[268272]: 2025-10-08 10:09:51.595923371 +0000 UTC m=+0.025180728 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:09:51 compute-0 podman[268272]: 2025-10-08 10:09:51.694992486 +0000 UTC m=+0.124249833 container init 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 10:09:51 compute-0 podman[268272]: 2025-10-08 10:09:51.701748207 +0000 UTC m=+0.131005534 container start 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:09:51 compute-0 podman[268272]: 2025-10-08 10:09:51.70485668 +0000 UTC m=+0.134114027 container attach 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 10:09:51 compute-0 hardcore_curran[268289]: 167 167
Oct 08 10:09:51 compute-0 systemd[1]: libpod-4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c.scope: Deactivated successfully.
Oct 08 10:09:51 compute-0 podman[268272]: 2025-10-08 10:09:51.707265349 +0000 UTC m=+0.136522686 container died 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:09:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6d1b3449f77e13c23d7dc8481df2b3987932b3962fe1fe8be3ff0b39f2d5fdc-merged.mount: Deactivated successfully.
Oct 08 10:09:51 compute-0 podman[268272]: 2025-10-08 10:09:51.747446419 +0000 UTC m=+0.176703746 container remove 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:09:51 compute-0 systemd[1]: libpod-conmon-4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c.scope: Deactivated successfully.
Oct 08 10:09:51 compute-0 podman[268312]: 2025-10-08 10:09:51.93438824 +0000 UTC m=+0.053851250 container create 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 10:09:51 compute-0 systemd[1]: Started libpod-conmon-7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e.scope.
Oct 08 10:09:51 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:51 compute-0 podman[268312]: 2025-10-08 10:09:51.905001525 +0000 UTC m=+0.024464565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:09:52 compute-0 podman[268312]: 2025-10-08 10:09:52.011330318 +0000 UTC m=+0.130793348 container init 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 10:09:52 compute-0 podman[268312]: 2025-10-08 10:09:52.019584799 +0000 UTC m=+0.139047799 container start 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:09:52 compute-0 podman[268312]: 2025-10-08 10:09:52.022808755 +0000 UTC m=+0.142271755 container attach 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 10:09:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:52 compute-0 podman[268327]: 2025-10-08 10:09:52.072654202 +0000 UTC m=+0.097880846 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 08 10:09:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 941 KiB/s wr, 97 op/s
Oct 08 10:09:52 compute-0 gracious_curran[268331]: {
Oct 08 10:09:52 compute-0 gracious_curran[268331]:     "1": [
Oct 08 10:09:52 compute-0 gracious_curran[268331]:         {
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "devices": [
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "/dev/loop3"
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             ],
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "lv_name": "ceph_lv0",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "lv_size": "21470642176",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "name": "ceph_lv0",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "tags": {
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.cluster_name": "ceph",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.crush_device_class": "",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.encrypted": "0",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.osd_id": "1",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.type": "block",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.vdo": "0",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:                 "ceph.with_tpm": "0"
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             },
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "type": "block",
Oct 08 10:09:52 compute-0 gracious_curran[268331]:             "vg_name": "ceph_vg0"
Oct 08 10:09:52 compute-0 gracious_curran[268331]:         }
Oct 08 10:09:52 compute-0 gracious_curran[268331]:     ]
Oct 08 10:09:52 compute-0 gracious_curran[268331]: }
Oct 08 10:09:52 compute-0 systemd[1]: libpod-7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e.scope: Deactivated successfully.
Oct 08 10:09:52 compute-0 podman[268312]: 2025-10-08 10:09:52.318935362 +0000 UTC m=+0.438398372 container died 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 10:09:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d-merged.mount: Deactivated successfully.
Oct 08 10:09:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:52 compute-0 podman[268312]: 2025-10-08 10:09:52.364164188 +0000 UTC m=+0.483627198 container remove 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 08 10:09:52 compute-0 systemd[1]: libpod-conmon-7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e.scope: Deactivated successfully.
Oct 08 10:09:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:52.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:52 compute-0 sudo[268208]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:52 compute-0 sudo[268371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:09:52 compute-0 sudo[268371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:52 compute-0 sudo[268371]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:52 compute-0 ceph-mon[73572]: pgmap v758: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 941 KiB/s wr, 97 op/s
Oct 08 10:09:52 compute-0 sudo[268396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:09:52 compute-0 sudo[268396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:52 compute-0 nova_compute[262220]: 2025-10-08 10:09:52.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:52 compute-0 podman[268461]: 2025-10-08 10:09:52.942155825 +0000 UTC m=+0.039181568 container create f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:09:52 compute-0 systemd[1]: Started libpod-conmon-f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813.scope.
Oct 08 10:09:53 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:09:53 compute-0 podman[268461]: 2025-10-08 10:09:52.924781395 +0000 UTC m=+0.021807148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:09:53 compute-0 podman[268461]: 2025-10-08 10:09:53.021627206 +0000 UTC m=+0.118652989 container init f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 10:09:53 compute-0 podman[268461]: 2025-10-08 10:09:53.028623985 +0000 UTC m=+0.125649738 container start f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 10:09:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:09:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:53.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:09:53 compute-0 laughing_fermat[268478]: 167 167
Oct 08 10:09:53 compute-0 systemd[1]: libpod-f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813.scope: Deactivated successfully.
Oct 08 10:09:53 compute-0 podman[268461]: 2025-10-08 10:09:53.036575777 +0000 UTC m=+0.133601560 container attach f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:09:53 compute-0 podman[268461]: 2025-10-08 10:09:53.037453276 +0000 UTC m=+0.134479049 container died f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:09:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b915765bc985c3c203b4caf38d86323411ada4a3460db64142cb16d3b803c87-merged.mount: Deactivated successfully.
Oct 08 10:09:53 compute-0 podman[268461]: 2025-10-08 10:09:53.226542358 +0000 UTC m=+0.323568111 container remove f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 10:09:53 compute-0 systemd[1]: libpod-conmon-f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813.scope: Deactivated successfully.
Oct 08 10:09:53 compute-0 podman[268503]: 2025-10-08 10:09:53.415614558 +0000 UTC m=+0.062392341 container create c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:09:53 compute-0 systemd[1]: Started libpod-conmon-c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4.scope.
Oct 08 10:09:53 compute-0 podman[268503]: 2025-10-08 10:09:53.378325033 +0000 UTC m=+0.025102836 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:09:53 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:09:53 compute-0 podman[268503]: 2025-10-08 10:09:53.511187758 +0000 UTC m=+0.157965561 container init c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 10:09:53 compute-0 podman[268503]: 2025-10-08 10:09:53.519543053 +0000 UTC m=+0.166320826 container start c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:09:53 compute-0 podman[268503]: 2025-10-08 10:09:53.523088359 +0000 UTC m=+0.169866142 container attach c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:09:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:09:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 op/s
Oct 08 10:09:54 compute-0 lvm[268594]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:09:54 compute-0 lvm[268594]: VG ceph_vg0 finished
Oct 08 10:09:54 compute-0 hungry_blackwell[268519]: {}
Oct 08 10:09:54 compute-0 systemd[1]: libpod-c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4.scope: Deactivated successfully.
Oct 08 10:09:54 compute-0 systemd[1]: libpod-c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4.scope: Consumed 1.068s CPU time.
Oct 08 10:09:54 compute-0 podman[268503]: 2025-10-08 10:09:54.248718967 +0000 UTC m=+0.895496770 container died c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 10:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c-merged.mount: Deactivated successfully.
Oct 08 10:09:54 compute-0 podman[268503]: 2025-10-08 10:09:54.330986518 +0000 UTC m=+0.977764301 container remove c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:09:54 compute-0 systemd[1]: libpod-conmon-c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4.scope: Deactivated successfully.
Oct 08 10:09:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:54 compute-0 sudo[268396]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:09:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:54.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:09:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:09:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:09:54 compute-0 sudo[268612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:09:54 compute-0 sudo[268612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:09:54 compute-0 sudo[268612]: pam_unix(sudo:session): session closed for user root
Oct 08 10:09:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:55.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:55 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 08 10:09:55 compute-0 ceph-mon[73572]: pgmap v759: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 op/s
Oct 08 10:09:55 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:09:55 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:09:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:09:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:09:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 08 10:09:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:56.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:56 compute-0 nova_compute[262220]: 2025-10-08 10:09:56.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:09:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:57.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:09:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:57.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:09:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:57.408 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:09:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:57.408 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:09:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:09:57.409 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:09:57 compute-0 ceph-mon[73572]: pgmap v760: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 08 10:09:57 compute-0 ovn_controller[153187]: 2025-10-08T10:09:57Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:d1:5c 10.100.0.6
Oct 08 10:09:57 compute-0 ovn_controller[153187]: 2025-10-08T10:09:57Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:d1:5c 10.100.0.6
Oct 08 10:09:57 compute-0 nova_compute[262220]: 2025-10-08 10:09:57.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:09:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002980 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 08 10:09:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:58.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:58 compute-0 ceph-mon[73572]: pgmap v761: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 08 10:09:58 compute-0 sshd-session[267785]: Connection closed by 66.132.153.137 port 48420 [preauth]
Oct 08 10:09:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:09:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:09:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:09:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:59.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:09:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 08 10:10:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:00 compute-0 ceph-mon[73572]: overall HEALTH_OK
Oct 08 10:10:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 08 10:10:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002980 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:00.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:01.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:01 compute-0 ceph-mon[73572]: pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 08 10:10:01 compute-0 nova_compute[262220]: 2025-10-08 10:10:01.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:10:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:02.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:02 compute-0 nova_compute[262220]: 2025-10-08 10:10:02.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:10:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:10:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:03.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:03 compute-0 ceph-mon[73572]: pgmap v763: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:10:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:10:03 compute-0 nova_compute[262220]: 2025-10-08 10:10:03.459 2 INFO nova.compute.manager [None req-2cee94aa-4cf6-4621-b8e5-1fd66eab24e8 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Get console output
Oct 08 10:10:03 compute-0 nova_compute[262220]: 2025-10-08 10:10:03.465 2 INFO oslo.privsep.daemon [None req-2cee94aa-4cf6-4621-b8e5-1fd66eab24e8 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpyuw02l4u/privsep.sock']
Oct 08 10:10:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:10:04 compute-0 nova_compute[262220]: 2025-10-08 10:10:04.198 2 INFO oslo.privsep.daemon [None req-2cee94aa-4cf6-4621-b8e5-1fd66eab24e8 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Spawned new privsep daemon via rootwrap
Oct 08 10:10:04 compute-0 nova_compute[262220]: 2025-10-08 10:10:04.053 631 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 10:10:04 compute-0 nova_compute[262220]: 2025-10-08 10:10:04.057 631 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 10:10:04 compute-0 nova_compute[262220]: 2025-10-08 10:10:04.059 631 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 08 10:10:04 compute-0 nova_compute[262220]: 2025-10-08 10:10:04.059 631 INFO oslo.privsep.daemon [-] privsep daemon running as pid 631
Oct 08 10:10:04 compute-0 nova_compute[262220]: 2025-10-08 10:10:04.306 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 08 10:10:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:04 compute-0 podman[268654]: 2025-10-08 10:10:04.937373322 +0000 UTC m=+0.098661412 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 08 10:10:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:05.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:05 compute-0 ceph-mon[73572]: pgmap v764: 353 pgs: 353 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:10:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:05] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct 08 10:10:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:05] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct 08 10:10:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101006 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:10:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:10:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:06.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:06 compute-0 sudo[268682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:10:06 compute-0 sudo[268682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:06 compute-0 sudo[268682]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:06 compute-0 nova_compute[262220]: 2025-10-08 10:10:06.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:07.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:07.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:10:07 compute-0 ceph-mon[73572]: pgmap v765: 353 pgs: 353 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:10:07 compute-0 nova_compute[262220]: 2025-10-08 10:10:07.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:10:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:08.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:09.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:09 compute-0 ceph-mon[73572]: pgmap v766: 353 pgs: 353 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:10:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:10:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:10.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:11.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:11 compute-0 ceph-mon[73572]: pgmap v767: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:10:11 compute-0 nova_compute[262220]: 2025-10-08 10:10:11.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:11 compute-0 podman[268713]: 2025-10-08 10:10:11.90282596 +0000 UTC m=+0.064258772 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:10:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 08 10:10:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:12 compute-0 nova_compute[262220]: 2025-10-08 10:10:12.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:12 compute-0 podman[268735]: 2025-10-08 10:10:12.930083295 +0000 UTC m=+0.074624823 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 10:10:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:13.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:13 compute-0 ceph-mon[73572]: pgmap v768: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 08 10:10:13 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2340573493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 08 10:10:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:10:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:14 compute-0 ceph-mon[73572]: pgmap v769: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 08 10:10:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:15.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:15 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:15.306 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:10:15 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:15.307 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:10:15 compute-0 nova_compute[262220]: 2025-10-08 10:10:15.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:15] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct 08 10:10:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:15] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct 08 10:10:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Oct 08 10:10:16 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:16.309 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:10:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:16 compute-0 nova_compute[262220]: 2025-10-08 10:10:16.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:17.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:17.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:10:17 compute-0 ceph-mon[73572]: pgmap v770: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Oct 08 10:10:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:10:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:10:17 compute-0 nova_compute[262220]: 2025-10-08 10:10:17.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:10:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:10:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:10:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:10:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Oct 08 10:10:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:10:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:10:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:10:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:10:18 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/188651906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:10:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:10:18 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3627852261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:10:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:19.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:19 compute-0 ceph-mon[73572]: pgmap v771: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Oct 08 10:10:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101019 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:10:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 08 10:10:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:21.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:21 compute-0 ceph-mon[73572]: pgmap v772: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 08 10:10:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1480585905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:10:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1480585905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:10:21 compute-0 nova_compute[262220]: 2025-10-08 10:10:21.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 08 10:10:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:22 compute-0 nova_compute[262220]: 2025-10-08 10:10:22.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:22 compute-0 podman[268767]: 2025-10-08 10:10:22.900710392 +0000 UTC m=+0.057397936 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:10:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:23.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:23 compute-0 ceph-mon[73572]: pgmap v773: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 08 10:10:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 08 10:10:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900036f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:24.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:25.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:10:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:10:25 compute-0 ceph-mon[73572]: pgmap v774: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 08 10:10:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:25] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Oct 08 10:10:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:25] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Oct 08 10:10:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 08 10:10:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:26.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:26 compute-0 nova_compute[262220]: 2025-10-08 10:10:26.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:26 compute-0 sudo[268791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:10:26 compute-0 sudo[268791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:26 compute-0 sudo[268791]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900036f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:27.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:27.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:10:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:27.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:10:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:27.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:10:27 compute-0 ceph-mon[73572]: pgmap v775: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 08 10:10:27 compute-0 nova_compute[262220]: 2025-10-08 10:10:27.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 08 10:10:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:10:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:28.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:28 compute-0 ceph-mon[73572]: pgmap v776: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 08 10:10:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:29.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:29 compute-0 nova_compute[262220]: 2025-10-08 10:10:29.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900036f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 08 10:10:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:30.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:10:31 compute-0 ceph-mon[73572]: pgmap v777: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 08 10:10:31 compute-0 nova_compute[262220]: 2025-10-08 10:10:31.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Oct 08 10:10:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:32.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:32 compute-0 ceph-mon[73572]: pgmap v778: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Oct 08 10:10:32 compute-0 nova_compute[262220]: 2025-10-08 10:10:32.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:10:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:10:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:33.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:10:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101034 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:10:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 79 op/s
Oct 08 10:10:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:10:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:10:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:34 compute-0 ceph-mon[73572]: pgmap v779: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 79 op/s
Oct 08 10:10:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:35] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 08 10:10:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:35] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 08 10:10:35 compute-0 podman[268825]: 2025-10-08 10:10:35.952057691 +0000 UTC m=+0.112327431 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 08 10:10:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Oct 08 10:10:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:36.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:36 compute-0 nova_compute[262220]: 2025-10-08 10:10:36.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:37.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:10:37 compute-0 ceph-mon[73572]: pgmap v780: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Oct 08 10:10:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 08 10:10:37 compute-0 nova_compute[262220]: 2025-10-08 10:10:37.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Oct 08 10:10:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:38.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:39.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:39 compute-0 ceph-mon[73572]: pgmap v781: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Oct 08 10:10:39 compute-0 nova_compute[262220]: 2025-10-08 10:10:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:39 compute-0 nova_compute[262220]: 2025-10-08 10:10:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:39 compute-0 nova_compute[262220]: 2025-10-08 10:10:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 08 10:10:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:40 compute-0 nova_compute[262220]: 2025-10-08 10:10:40.881 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.041 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.042 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.042 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.042 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:41.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.194 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.195 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.195 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.195 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.196 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:10:41 compute-0 ceph-mon[73572]: pgmap v782: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 08 10:10:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101041 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 08 10:10:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:10:41 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976522226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.631 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.703 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.704 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.855 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.856 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4404MB free_disk=59.89728546142578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.856 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:10:41 compute-0 nova_compute[262220]: 2025-10-08 10:10:41.857 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:10:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.197 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance f49b788e-70d1-4bc2-9f90-381017f2b232 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.197 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.197 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.231 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:10:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1976522226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:42.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:10:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791682770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.712 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.720 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.817 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:10:42 compute-0 podman[268904]: 2025-10-08 10:10:42.894474032 +0000 UTC m=+0.056526397 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 10:10:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.958 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:10:42 compute-0 nova_compute[262220]: 2025-10-08 10:10:42.958 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:10:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:43.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:43 compute-0 ceph-mon[73572]: pgmap v783: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 08 10:10:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2791682770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2989463810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:43 compute-0 nova_compute[262220]: 2025-10-08 10:10:43.803 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:43 compute-0 nova_compute[262220]: 2025-10-08 10:10:43.803 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:10:43 compute-0 nova_compute[262220]: 2025-10-08 10:10:43.803 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:10:43 compute-0 podman[268925]: 2025-10-08 10:10:43.886958906 +0000 UTC m=+0.052348950 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 08 10:10:44 compute-0 nova_compute[262220]: 2025-10-08 10:10:44.044 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:10:44 compute-0 nova_compute[262220]: 2025-10-08 10:10:44.044 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:10:44 compute-0 nova_compute[262220]: 2025-10-08 10:10:44.044 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 08 10:10:44 compute-0 nova_compute[262220]: 2025-10-08 10:10:44.045 2 DEBUG nova.objects.instance [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f49b788e-70d1-4bc2-9f90-381017f2b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:10:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 08 10:10:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3124647078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:44.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00020e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:45.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:45 compute-0 ceph-mon[73572]: pgmap v784: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 08 10:10:45 compute-0 nova_compute[262220]: 2025-10-08 10:10:45.618 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:10:45 compute-0 nova_compute[262220]: 2025-10-08 10:10:45.656 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:10:45 compute-0 nova_compute[262220]: 2025-10-08 10:10:45.657 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 08 10:10:45 compute-0 nova_compute[262220]: 2025-10-08 10:10:45.657 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:45 compute-0 nova_compute[262220]: 2025-10-08 10:10:45.657 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:10:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:45] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 08 10:10:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:45] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 08 10:10:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 08 10:10:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:46.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/842369940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:46 compute-0 nova_compute[262220]: 2025-10-08 10:10:46.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:46 compute-0 sudo[268950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:10:46 compute-0 sudo[268950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:46 compute-0 sudo[268950]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:47.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:47.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:10:47 compute-0 ceph-mon[73572]: pgmap v785: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 08 10:10:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3897777676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:10:47
Oct 08 10:10:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:10:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:10:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', '.nfs', 'vms', 'default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups']
Oct 08 10:10:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:10:47 compute-0 nova_compute[262220]: 2025-10-08 10:10:47.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:10:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:10:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:10:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015193727819561111 of space, bias 1.0, pg target 0.45581183458683333 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:10:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00020e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:10:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:10:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:48.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:10:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1624746832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:48 compute-0 ceph-mon[73572]: pgmap v786: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 08 10:10:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:49.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3180223112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 08 10:10:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00020e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:50.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:50 compute-0 ceph-mon[73572]: pgmap v787: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 08 10:10:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:51.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:51 compute-0 nova_compute[262220]: 2025-10-08 10:10:51.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Oct 08 10:10:52 compute-0 ovn_controller[153187]: 2025-10-08T10:10:52Z|00032|binding|INFO|Releasing lport 950da3ad-35fb-4b98-a8cb-0ee192607b20 from this chassis (sb_readonly=0)
Oct 08 10:10:52 compute-0 nova_compute[262220]: 2025-10-08 10:10:52.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:52 compute-0 nova_compute[262220]: 2025-10-08 10:10:52.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0002280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:53.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:53 compute-0 ceph-mon[73572]: pgmap v788: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Oct 08 10:10:53 compute-0 podman[268983]: 2025-10-08 10:10:53.910220827 +0000 UTC m=+0.075808232 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 10:10:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.258 2 DEBUG nova.compute.manager [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.258 2 DEBUG nova.compute.manager [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing instance network info cache due to event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.258 2 DEBUG oslo_concurrency.lockutils [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.258 2 DEBUG oslo_concurrency.lockutils [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.259 2 DEBUG nova.network.neutron [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.375 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.375 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.376 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.376 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.377 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.378 2 INFO nova.compute.manager [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Terminating instance
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.379 2 DEBUG nova.compute.manager [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 08 10:10:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:54 compute-0 kernel: tapd6bc221b-bf (unregistering): left promiscuous mode
Oct 08 10:10:54 compute-0 NetworkManager[44872]: <info>  [1759918254.4376] device (tapd6bc221b-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 10:10:54 compute-0 ovn_controller[153187]: 2025-10-08T10:10:54Z|00033|binding|INFO|Releasing lport d6bc221b-bf28-4c61-b116-cd61209c7f31 from this chassis (sb_readonly=0)
Oct 08 10:10:54 compute-0 ovn_controller[153187]: 2025-10-08T10:10:54Z|00034|binding|INFO|Setting lport d6bc221b-bf28-4c61-b116-cd61209c7f31 down in Southbound
Oct 08 10:10:54 compute-0 ovn_controller[153187]: 2025-10-08T10:10:54Z|00035|binding|INFO|Removing iface tapd6bc221b-bf ovn-installed in OVS
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:54.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.488 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:d1:5c 10.100.0.6'], port_security=['fa:16:3e:9d:d1:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f49b788e-70d1-4bc2-9f90-381017f2b232', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5c6f88b-41ed-45ea-b491-931be9a75138', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b714465-ebb6-4c8b-ab03-a9d6fbedd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6475b99-4f25-4ccc-88e7-4eafaf6f3891, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=d6bc221b-bf28-4c61-b116-cd61209c7f31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.489 163175 INFO neutron.agent.ovn.metadata.agent [-] Port d6bc221b-bf28-4c61-b116-cd61209c7f31 in datapath f5c6f88b-41ed-45ea-b491-931be9a75138 unbound from our chassis
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.490 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5c6f88b-41ed-45ea-b491-931be9a75138, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.491 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9da7e1-229a-4e0e-994a-86f5a971ccd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.491 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 namespace which is not needed anymore
Oct 08 10:10:54 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 08 10:10:54 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 15.732s CPU time.
Oct 08 10:10:54 compute-0 systemd-machined[216030]: Machine qemu-1-instance-00000001 terminated.
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.609 2 INFO nova.virt.libvirt.driver [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance destroyed successfully.
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.610 2 DEBUG nova.objects.instance [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'resources' on Instance uuid f49b788e-70d1-4bc2-9f90-381017f2b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:10:54 compute-0 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [NOTICE]   (267909) : haproxy version is 2.8.14-c23fe91
Oct 08 10:10:54 compute-0 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [NOTICE]   (267909) : path to executable is /usr/sbin/haproxy
Oct 08 10:10:54 compute-0 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [WARNING]  (267909) : Exiting Master process...
Oct 08 10:10:54 compute-0 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [WARNING]  (267909) : Exiting Master process...
Oct 08 10:10:54 compute-0 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [ALERT]    (267909) : Current worker (267911) exited with code 143 (Terminated)
Oct 08 10:10:54 compute-0 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [WARNING]  (267909) : All workers exited. Exiting... (0)
Oct 08 10:10:54 compute-0 systemd[1]: libpod-0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927.scope: Deactivated successfully.
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.636 2 DEBUG nova.virt.libvirt.vif [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1358472667',display_name='tempest-TestNetworkBasicOps-server-1358472667',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1358472667',id=1,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGCqOiRkCvMZRP8fdEWleadJa9k0DhfKx++pZ4blF3y05LQ1KZbyE4MTPNAMp9BRrBdK92MH6DC+pII7aGjodGwK7AspsjQ0hDDswc17pIZ089tmxUxos+hWl7sAULow5Q==',key_name='tempest-TestNetworkBasicOps-1893605271',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:09:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-50tfjz8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:09:44Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=f49b788e-70d1-4bc2-9f90-381017f2b232,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.637 2 DEBUG nova.network.os_vif_util [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:10:54 compute-0 podman[269027]: 2025-10-08 10:10:54.638338455 +0000 UTC m=+0.054002435 container died 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.638 2 DEBUG nova.network.os_vif_util [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.639 2 DEBUG os_vif [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.641 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6bc221b-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.648 2 INFO os_vif [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf')
Oct 08 10:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927-userdata-shm.mount: Deactivated successfully.
Oct 08 10:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-78e3d1aa5320eb20e28cf9285cbf8434fde889ae25e1684b2e2a512764f7589a-merged.mount: Deactivated successfully.
Oct 08 10:10:54 compute-0 podman[269027]: 2025-10-08 10:10:54.680715747 +0000 UTC m=+0.096379727 container cleanup 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:10:54 compute-0 systemd[1]: libpod-conmon-0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927.scope: Deactivated successfully.
Oct 08 10:10:54 compute-0 podman[269082]: 2025-10-08 10:10:54.752914039 +0000 UTC m=+0.049093304 container remove 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.761 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a6e149-6356-4dd0-8383-c488ce4c80a7]: (4, ('Wed Oct  8 10:10:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 (0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927)\n0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927\nWed Oct  8 10:10:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 (0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927)\n0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.763 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[8f72415d-1bcd-451c-9ff9-41c4d7bac199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.764 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5c6f88b-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:10:54 compute-0 sudo[269083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:10:54 compute-0 kernel: tapf5c6f88b-40: left promiscuous mode
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:54 compute-0 sudo[269083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:54 compute-0 sudo[269083]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.785 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6e30ce52-dd99-4309-a361-eba3cbe77ce7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.818 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[1c01a8fa-07b5-408d-a924-8fb79bfc015e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.819 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c53f0c-2b6c-4ac4-907e-d7340e130098]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.835 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[52fbd9a0-5f8d-4ba6-911f-e2f2cb6af048]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414547, 'reachable_time': 44869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269148, 'error': None, 'target': 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:10:54 compute-0 sudo[269123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:10:54 compute-0 sudo[269123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.849 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 08 10:10:54 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.850 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5b25cf-149b-4c1b-8ba3-01c9b4f7def9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:10:54 compute-0 systemd[1]: run-netns-ovnmeta\x2df5c6f88b\x2d41ed\x2d45ea\x2db491\x2d931be9a75138.mount: Deactivated successfully.
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.953 2 DEBUG nova.compute.manager [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-unplugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.953 2 DEBUG oslo_concurrency.lockutils [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.954 2 DEBUG oslo_concurrency.lockutils [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.954 2 DEBUG oslo_concurrency.lockutils [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.954 2 DEBUG nova.compute.manager [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] No waiting events found dispatching network-vif-unplugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:10:54 compute-0 nova_compute[262220]: 2025-10-08 10:10:54.955 2 DEBUG nova.compute.manager [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-unplugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 08 10:10:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:55 compute-0 nova_compute[262220]: 2025-10-08 10:10:55.090 2 INFO nova.virt.libvirt.driver [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Deleting instance files /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232_del
Oct 08 10:10:55 compute-0 nova_compute[262220]: 2025-10-08 10:10:55.091 2 INFO nova.virt.libvirt.driver [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Deletion of /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232_del complete
Oct 08 10:10:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:55 compute-0 ceph-mon[73572]: pgmap v789: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Oct 08 10:10:55 compute-0 nova_compute[262220]: 2025-10-08 10:10:55.193 2 DEBUG nova.virt.libvirt.host [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 08 10:10:55 compute-0 nova_compute[262220]: 2025-10-08 10:10:55.194 2 INFO nova.virt.libvirt.host [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] UEFI support detected
Oct 08 10:10:55 compute-0 nova_compute[262220]: 2025-10-08 10:10:55.195 2 INFO nova.compute.manager [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 08 10:10:55 compute-0 nova_compute[262220]: 2025-10-08 10:10:55.196 2 DEBUG oslo.service.loopingcall [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 08 10:10:55 compute-0 nova_compute[262220]: 2025-10-08 10:10:55.197 2 DEBUG nova.compute.manager [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 08 10:10:55 compute-0 nova_compute[262220]: 2025-10-08 10:10:55.197 2 DEBUG nova.network.neutron [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 08 10:10:55 compute-0 sudo[269123]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:10:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:10:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:10:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:10:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:10:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:10:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:10:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:10:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:10:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:10:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:10:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:10:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:10:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:10:55 compute-0 sudo[269187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:10:55 compute-0 sudo[269187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:55 compute-0 sudo[269187]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:55 compute-0 sudo[269212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:10:55 compute-0 sudo[269212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:55] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Oct 08 10:10:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:55] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Oct 08 10:10:56 compute-0 podman[269277]: 2025-10-08 10:10:56.041513089 +0000 UTC m=+0.040600694 container create 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:10:56 compute-0 systemd[1]: Started libpod-conmon-2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172.scope.
Oct 08 10:10:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 08 10:10:56 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:10:56 compute-0 podman[269277]: 2025-10-08 10:10:56.026081523 +0000 UTC m=+0.025169148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:10:56 compute-0 podman[269277]: 2025-10-08 10:10:56.122620604 +0000 UTC m=+0.121708219 container init 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 10:10:56 compute-0 podman[269277]: 2025-10-08 10:10:56.13132687 +0000 UTC m=+0.130414465 container start 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 10:10:56 compute-0 podman[269277]: 2025-10-08 10:10:56.135938912 +0000 UTC m=+0.135026537 container attach 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct 08 10:10:56 compute-0 sweet_mahavira[269294]: 167 167
Oct 08 10:10:56 compute-0 systemd[1]: libpod-2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172.scope: Deactivated successfully.
Oct 08 10:10:56 compute-0 podman[269277]: 2025-10-08 10:10:56.139336243 +0000 UTC m=+0.138423838 container died 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d80a2554b001469d27cd93a27bdf001601f0b2c2b5f0dcda226fd62219eecdbe-merged.mount: Deactivated successfully.
Oct 08 10:10:56 compute-0 podman[269277]: 2025-10-08 10:10:56.185130548 +0000 UTC m=+0.184218153 container remove 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:10:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:10:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:10:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:10:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:10:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:10:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:10:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:10:56 compute-0 systemd[1]: libpod-conmon-2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172.scope: Deactivated successfully.
Oct 08 10:10:56 compute-0 nova_compute[262220]: 2025-10-08 10:10:56.221 2 DEBUG nova.network.neutron [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updated VIF entry in instance network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:10:56 compute-0 nova_compute[262220]: 2025-10-08 10:10:56.223 2 DEBUG nova.network.neutron [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:10:56 compute-0 podman[269317]: 2025-10-08 10:10:56.371942784 +0000 UTC m=+0.055138822 container create c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:10:56 compute-0 systemd[1]: Started libpod-conmon-c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95.scope.
Oct 08 10:10:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:56 compute-0 podman[269317]: 2025-10-08 10:10:56.348159563 +0000 UTC m=+0.031355671 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:10:56 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:56 compute-0 podman[269317]: 2025-10-08 10:10:56.466595114 +0000 UTC m=+0.149791152 container init c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:10:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:56.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:56 compute-0 podman[269317]: 2025-10-08 10:10:56.476562821 +0000 UTC m=+0.159758859 container start c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 08 10:10:56 compute-0 podman[269317]: 2025-10-08 10:10:56.481075389 +0000 UTC m=+0.164271437 container attach c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:10:56 compute-0 nova_compute[262220]: 2025-10-08 10:10:56.518 2 DEBUG oslo_concurrency.lockutils [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:10:56 compute-0 quizzical_ptolemy[269334]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:10:56 compute-0 quizzical_ptolemy[269334]: --> All data devices are unavailable
Oct 08 10:10:56 compute-0 systemd[1]: libpod-c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95.scope: Deactivated successfully.
Oct 08 10:10:56 compute-0 conmon[269334]: conmon c5335d511a0f91cfa822 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95.scope/container/memory.events
Oct 08 10:10:56 compute-0 podman[269317]: 2025-10-08 10:10:56.861131855 +0000 UTC m=+0.544327873 container died c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a-merged.mount: Deactivated successfully.
Oct 08 10:10:56 compute-0 podman[269317]: 2025-10-08 10:10:56.901467819 +0000 UTC m=+0.584663837 container remove c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:10:56 compute-0 systemd[1]: libpod-conmon-c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95.scope: Deactivated successfully.
Oct 08 10:10:56 compute-0 sudo[269212]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:57 compute-0 sudo[269363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:10:57 compute-0 sudo[269363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:57 compute-0 sudo[269363]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.039 2 DEBUG nova.compute.manager [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.040 2 DEBUG oslo_concurrency.lockutils [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.040 2 DEBUG oslo_concurrency.lockutils [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.041 2 DEBUG oslo_concurrency.lockutils [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.041 2 DEBUG nova.compute.manager [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] No waiting events found dispatching network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.041 2 WARNING nova.compute.manager [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received unexpected event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 for instance with vm_state active and task_state deleting.
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.061 2 DEBUG nova.network.neutron [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.080 2 INFO nova.compute.manager [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Took 1.88 seconds to deallocate network for instance.
Oct 08 10:10:57 compute-0 sudo[269388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:10:57 compute-0 sudo[269388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:57.127Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:10:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:57.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:10:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:57.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.129 2 DEBUG nova.compute.manager [req-67c2bf7d-64a5-4b56-ab38-76ecbfc8e0e0 req-e2a8d733-e0d1-4600-a37a-73bd3ee92768 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-deleted-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.138 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.138 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.187 2 DEBUG oslo_concurrency.processutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:10:57 compute-0 ceph-mon[73572]: pgmap v790: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 08 10:10:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:57.408 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:10:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:57.409 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:10:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:10:57.409 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:10:57 compute-0 podman[269474]: 2025-10-08 10:10:57.554550153 +0000 UTC m=+0.073824146 container create 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 10:10:57 compute-0 podman[269474]: 2025-10-08 10:10:57.523445561 +0000 UTC m=+0.042719584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:10:57 compute-0 systemd[1]: Started libpod-conmon-402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a.scope.
Oct 08 10:10:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:10:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:10:57 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285040432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:57 compute-0 podman[269474]: 2025-10-08 10:10:57.664799105 +0000 UTC m=+0.184073118 container init 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:10:57 compute-0 podman[269474]: 2025-10-08 10:10:57.671554337 +0000 UTC m=+0.190828330 container start 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:10:57 compute-0 podman[269474]: 2025-10-08 10:10:57.675093803 +0000 UTC m=+0.194367826 container attach 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 08 10:10:57 compute-0 goofy_zhukovsky[269490]: 167 167
Oct 08 10:10:57 compute-0 systemd[1]: libpod-402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a.scope: Deactivated successfully.
Oct 08 10:10:57 compute-0 podman[269474]: 2025-10-08 10:10:57.677523543 +0000 UTC m=+0.196797526 container died 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.680 2 DEBUG oslo_concurrency.processutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.692 2 DEBUG nova.compute.provider_tree [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc33d6d77ab6b74a557550b2163513f654d27a164ffdc2444660c5c1db0d1f3f-merged.mount: Deactivated successfully.
Oct 08 10:10:57 compute-0 podman[269474]: 2025-10-08 10:10:57.720615449 +0000 UTC m=+0.239889442 container remove 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 10:10:57 compute-0 systemd[1]: libpod-conmon-402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a.scope: Deactivated successfully.
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:57 compute-0 nova_compute[262220]: 2025-10-08 10:10:57.898 2 DEBUG nova.scheduler.client.report [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:10:57 compute-0 podman[269518]: 2025-10-08 10:10:57.910885559 +0000 UTC m=+0.052374082 container create 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Oct 08 10:10:57 compute-0 systemd[1]: Started libpod-conmon-5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803.scope.
Oct 08 10:10:57 compute-0 podman[269518]: 2025-10-08 10:10:57.892608138 +0000 UTC m=+0.034096681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:10:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:58 compute-0 nova_compute[262220]: 2025-10-08 10:10:58.003 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:58 compute-0 podman[269518]: 2025-10-08 10:10:58.016477818 +0000 UTC m=+0.157966341 container init 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:10:58 compute-0 podman[269518]: 2025-10-08 10:10:58.023505688 +0000 UTC m=+0.164994221 container start 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:10:58 compute-0 podman[269518]: 2025-10-08 10:10:58.026986283 +0000 UTC m=+0.168474826 container attach 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 08 10:10:58 compute-0 nova_compute[262220]: 2025-10-08 10:10:58.042 2 INFO nova.scheduler.client.report [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Deleted allocations for instance f49b788e-70d1-4bc2-9f90-381017f2b232
Oct 08 10:10:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 08 10:10:58 compute-0 nova_compute[262220]: 2025-10-08 10:10:58.101 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:10:58 compute-0 festive_colden[269535]: {
Oct 08 10:10:58 compute-0 festive_colden[269535]:     "1": [
Oct 08 10:10:58 compute-0 festive_colden[269535]:         {
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "devices": [
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "/dev/loop3"
Oct 08 10:10:58 compute-0 festive_colden[269535]:             ],
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "lv_name": "ceph_lv0",
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "lv_size": "21470642176",
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "name": "ceph_lv0",
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "tags": {
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.cluster_name": "ceph",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.crush_device_class": "",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.encrypted": "0",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.osd_id": "1",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.type": "block",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.vdo": "0",
Oct 08 10:10:58 compute-0 festive_colden[269535]:                 "ceph.with_tpm": "0"
Oct 08 10:10:58 compute-0 festive_colden[269535]:             },
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "type": "block",
Oct 08 10:10:58 compute-0 festive_colden[269535]:             "vg_name": "ceph_vg0"
Oct 08 10:10:58 compute-0 festive_colden[269535]:         }
Oct 08 10:10:58 compute-0 festive_colden[269535]:     ]
Oct 08 10:10:58 compute-0 festive_colden[269535]: }
Oct 08 10:10:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/285040432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:10:58 compute-0 systemd[1]: libpod-5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803.scope: Deactivated successfully.
Oct 08 10:10:58 compute-0 podman[269518]: 2025-10-08 10:10:58.305976008 +0000 UTC m=+0.447464531 container died 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Oct 08 10:10:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9-merged.mount: Deactivated successfully.
Oct 08 10:10:58 compute-0 podman[269518]: 2025-10-08 10:10:58.348101621 +0000 UTC m=+0.489590144 container remove 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:10:58 compute-0 systemd[1]: libpod-conmon-5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803.scope: Deactivated successfully.
Oct 08 10:10:58 compute-0 sudo[269388]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:58 compute-0 sudo[269556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:10:58 compute-0 sudo[269556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:58 compute-0 sudo[269556]: pam_unix(sudo:session): session closed for user root
Oct 08 10:10:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:10:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:58.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:10:58 compute-0 sudo[269581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:10:58 compute-0 sudo[269581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:10:58 compute-0 podman[269648]: 2025-10-08 10:10:58.896527258 +0000 UTC m=+0.041434683 container create 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:10:58 compute-0 systemd[1]: Started libpod-conmon-88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed.scope.
Oct 08 10:10:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:10:58 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:10:58 compute-0 podman[269648]: 2025-10-08 10:10:58.880220211 +0000 UTC m=+0.025127656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:10:58 compute-0 podman[269648]: 2025-10-08 10:10:58.978904743 +0000 UTC m=+0.123812188 container init 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 10:10:58 compute-0 podman[269648]: 2025-10-08 10:10:58.991960243 +0000 UTC m=+0.136867678 container start 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 10:10:58 compute-0 podman[269648]: 2025-10-08 10:10:58.995934873 +0000 UTC m=+0.140842298 container attach 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Oct 08 10:10:58 compute-0 lucid_galois[269664]: 167 167
Oct 08 10:10:58 compute-0 systemd[1]: libpod-88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed.scope: Deactivated successfully.
Oct 08 10:10:58 compute-0 podman[269648]: 2025-10-08 10:10:58.997423952 +0000 UTC m=+0.142331377 container died 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bccc18c5e20d44ab47bb3838995fb5f35a31121347a1beb9236df24684c714d0-merged.mount: Deactivated successfully.
Oct 08 10:10:59 compute-0 podman[269648]: 2025-10-08 10:10:59.037773407 +0000 UTC m=+0.182680832 container remove 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:10:59 compute-0 systemd[1]: libpod-conmon-88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed.scope: Deactivated successfully.
Oct 08 10:10:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:10:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:10:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:10:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:59.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:10:59 compute-0 podman[269689]: 2025-10-08 10:10:59.191732845 +0000 UTC m=+0.035391673 container create 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:10:59 compute-0 systemd[1]: Started libpod-conmon-4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883.scope.
Oct 08 10:10:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:10:59 compute-0 podman[269689]: 2025-10-08 10:10:59.261563649 +0000 UTC m=+0.105222497 container init 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:10:59 compute-0 podman[269689]: 2025-10-08 10:10:59.268022311 +0000 UTC m=+0.111681139 container start 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 08 10:10:59 compute-0 podman[269689]: 2025-10-08 10:10:59.176527826 +0000 UTC m=+0.020186664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:10:59 compute-0 podman[269689]: 2025-10-08 10:10:59.273439589 +0000 UTC m=+0.117098447 container attach 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:10:59 compute-0 ceph-mon[73572]: pgmap v791: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 08 10:10:59 compute-0 nova_compute[262220]: 2025-10-08 10:10:59.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:10:59 compute-0 lvm[269780]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:10:59 compute-0 lvm[269780]: VG ceph_vg0 finished
Oct 08 10:10:59 compute-0 wizardly_thompson[269706]: {}
Oct 08 10:11:00 compute-0 systemd[1]: libpod-4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883.scope: Deactivated successfully.
Oct 08 10:11:00 compute-0 systemd[1]: libpod-4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883.scope: Consumed 1.126s CPU time.
Oct 08 10:11:00 compute-0 podman[269785]: 2025-10-08 10:11:00.071685952 +0000 UTC m=+0.027000839 container died 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:11:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 56 op/s
Oct 08 10:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44-merged.mount: Deactivated successfully.
Oct 08 10:11:00 compute-0 podman[269785]: 2025-10-08 10:11:00.116853256 +0000 UTC m=+0.072168123 container remove 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:11:00 compute-0 systemd[1]: libpod-conmon-4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883.scope: Deactivated successfully.
Oct 08 10:11:00 compute-0 sudo[269581]: pam_unix(sudo:session): session closed for user root
Oct 08 10:11:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:11:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:11:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:11:00 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:11:00 compute-0 sudo[269801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:11:00 compute-0 sudo[269801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:11:00 compute-0 sudo[269801]: pam_unix(sudo:session): session closed for user root
Oct 08 10:11:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:00.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:01.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:01 compute-0 ceph-mon[73572]: pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 56 op/s
Oct 08 10:11:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:11:01 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:11:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:11:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:02.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:02 compute-0 nova_compute[262220]: 2025-10-08 10:11:02.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:11:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:11:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:03.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:03 compute-0 ceph-mon[73572]: pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:11:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.217170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263217219, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 952, "num_deletes": 251, "total_data_size": 1483731, "memory_usage": 1513072, "flush_reason": "Manual Compaction"}
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263225772, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1448051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23835, "largest_seqno": 24786, "table_properties": {"data_size": 1443544, "index_size": 2095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10469, "raw_average_key_size": 19, "raw_value_size": 1434284, "raw_average_value_size": 2711, "num_data_blocks": 94, "num_entries": 529, "num_filter_entries": 529, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918189, "oldest_key_time": 1759918189, "file_creation_time": 1759918263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8625 microseconds, and 3652 cpu microseconds.
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.225802) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1448051 bytes OK
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.225818) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227087) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227098) EVENT_LOG_v1 {"time_micros": 1759918263227094, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227110) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1479252, prev total WAL file size 1479252, number of live WAL files 2.
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227700) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1414KB)], [53(12MB)]
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263227772, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14531023, "oldest_snapshot_seqno": -1}
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5369 keys, 12379342 bytes, temperature: kUnknown
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263292059, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12379342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12344196, "index_size": 20636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137979, "raw_average_key_size": 25, "raw_value_size": 12247670, "raw_average_value_size": 2281, "num_data_blocks": 835, "num_entries": 5369, "num_filter_entries": 5369, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.292269) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12379342 bytes
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.293454) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.9 rd, 192.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.5 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(18.6) write-amplify(8.5) OK, records in: 5885, records dropped: 516 output_compression: NoCompression
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.293468) EVENT_LOG_v1 {"time_micros": 1759918263293461, "job": 28, "event": "compaction_finished", "compaction_time_micros": 64331, "compaction_time_cpu_micros": 26413, "output_level": 6, "num_output_files": 1, "total_output_size": 12379342, "num_input_records": 5885, "num_output_records": 5369, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263293751, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263295962, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:11:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:11:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:11:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:04 compute-0 nova_compute[262220]: 2025-10-08 10:11:04.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:05 compute-0 nova_compute[262220]: 2025-10-08 10:11:05.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:05 compute-0 ceph-mon[73572]: pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:11:05 compute-0 nova_compute[262220]: 2025-10-08 10:11:05.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:05] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Oct 08 10:11:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:05] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Oct 08 10:11:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:11:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:11:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:11:06 compute-0 sudo[269833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:11:06 compute-0 sudo[269833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:11:06 compute-0 sudo[269833]: pam_unix(sudo:session): session closed for user root
Oct 08 10:11:06 compute-0 podman[269857]: 2025-10-08 10:11:06.950903296 +0000 UTC m=+0.127401116 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 08 10:11:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:07.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:11:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:07 compute-0 ceph-mon[73572]: pgmap v795: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:11:07 compute-0 nova_compute[262220]: 2025-10-08 10:11:07.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:11:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:08.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:09 compute-0 ceph-mon[73572]: pgmap v796: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:11:09 compute-0 nova_compute[262220]: 2025-10-08 10:11:09.608 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759918254.6060555, f49b788e-70d1-4bc2-9f90-381017f2b232 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:11:09 compute-0 nova_compute[262220]: 2025-10-08 10:11:09.608 2 INFO nova.compute.manager [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] VM Stopped (Lifecycle Event)
Oct 08 10:11:09 compute-0 nova_compute[262220]: 2025-10-08 10:11:09.630 2 DEBUG nova.compute.manager [None req-e6220c05-7f4a-4b31-aeab-99262f396f92 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:11:09 compute-0 nova_compute[262220]: 2025-10-08 10:11:09.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:11:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:10.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:11 compute-0 ceph-mon[73572]: pgmap v797: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:11:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:12 compute-0 nova_compute[262220]: 2025-10-08 10:11:12.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:13 compute-0 ceph-mon[73572]: pgmap v798: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:13 compute-0 podman[269895]: 2025-10-08 10:11:13.894970261 +0000 UTC m=+0.053695045 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 08 10:11:13 compute-0 podman[269915]: 2025-10-08 10:11:13.983757347 +0000 UTC m=+0.057375055 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:11:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:14 compute-0 nova_compute[262220]: 2025-10-08 10:11:14.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:15 compute-0 ceph-mon[73572]: pgmap v799: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:15] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Oct 08 10:11:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:15] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Oct 08 10:11:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:16.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:17.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:11:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:17.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:11:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:17.130Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:11:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:17.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:17 compute-0 ceph-mon[73572]: pgmap v800: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:17 compute-0 nova_compute[262220]: 2025-10-08 10:11:17.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:11:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:11:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:11:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:11:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:11:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:11:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:11:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:11:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:11:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:18.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:19.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:19 compute-0 ceph-mon[73572]: pgmap v801: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:19 compute-0 nova_compute[262220]: 2025-10-08 10:11:19.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:11:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3422621586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2889715994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:11:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2889715994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:11:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:20.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:21.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:21 compute-0 ceph-mon[73572]: pgmap v802: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:11:21 compute-0 nova_compute[262220]: 2025-10-08 10:11:21.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:21 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:11:21.704 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:11:21 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:11:21.704 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:11:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:22 compute-0 ceph-mon[73572]: pgmap v803: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 08 10:11:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:22.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:22 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:11:22.706 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:11:22 compute-0 nova_compute[262220]: 2025-10-08 10:11:22.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:11:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:23.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:11:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:24 compute-0 nova_compute[262220]: 2025-10-08 10:11:24.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:24 compute-0 podman[269946]: 2025-10-08 10:11:24.886380581 +0000 UTC m=+0.047842642 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 10:11:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:25 compute-0 ceph-mon[73572]: pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:25] "GET /metrics HTTP/1.1" 200 48435 "" "Prometheus/2.51.0"
Oct 08 10:11:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:25] "GET /metrics HTTP/1.1" 200 48435 "" "Prometheus/2.51.0"
Oct 08 10:11:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:26 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/442168325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:11:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:26.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:26 compute-0 sudo[269968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:11:26 compute-0 sudo[269968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:11:26 compute-0 sudo[269968]: pam_unix(sudo:session): session closed for user root
Oct 08 10:11:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:27.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:11:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:27.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:11:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:27.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:27 compute-0 ceph-mon[73572]: pgmap v805: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:27 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3944000282' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:11:27 compute-0 nova_compute[262220]: 2025-10-08 10:11:27.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:29.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:29 compute-0 ceph-mon[73572]: pgmap v806: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:29 compute-0 nova_compute[262220]: 2025-10-08 10:11:29.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:30.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:31.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:31 compute-0 ceph-mon[73572]: pgmap v807: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:32 compute-0 nova_compute[262220]: 2025-10-08 10:11:32.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:11:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:11:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:33.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:33 compute-0 ceph-mon[73572]: pgmap v808: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 08 10:11:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:11:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 08 10:11:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:34 compute-0 nova_compute[262220]: 2025-10-08 10:11:34.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:35.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:35 compute-0 ceph-mon[73572]: pgmap v809: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 08 10:11:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101135 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:11:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:35] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Oct 08 10:11:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:35] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Oct 08 10:11:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:11:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:37.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:11:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:37.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:37 compute-0 ceph-mon[73572]: pgmap v810: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:11:37 compute-0 nova_compute[262220]: 2025-10-08 10:11:37.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:37 compute-0 podman[270004]: 2025-10-08 10:11:37.943827564 +0000 UTC m=+0.109015762 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 08 10:11:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:11:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:39.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:39 compute-0 ceph-mon[73572]: pgmap v811: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:11:39 compute-0 nova_compute[262220]: 2025-10-08 10:11:39.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:39 compute-0 nova_compute[262220]: 2025-10-08 10:11:39.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:11:39 compute-0 nova_compute[262220]: 2025-10-08 10:11:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:11:39 compute-0 nova_compute[262220]: 2025-10-08 10:11:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:11:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:11:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:40 compute-0 ovn_controller[153187]: 2025-10-08T10:11:40Z|00036|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 08 10:11:40 compute-0 nova_compute[262220]: 2025-10-08 10:11:40.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:11:40 compute-0 nova_compute[262220]: 2025-10-08 10:11:40.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:11:40 compute-0 nova_compute[262220]: 2025-10-08 10:11:40.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:11:40 compute-0 nova_compute[262220]: 2025-10-08 10:11:40.913 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:11:40 compute-0 nova_compute[262220]: 2025-10-08 10:11:40.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:11:40 compute-0 nova_compute[262220]: 2025-10-08 10:11:40.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:11:40 compute-0 nova_compute[262220]: 2025-10-08 10:11:40.914 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:11:40 compute-0 nova_compute[262220]: 2025-10-08 10:11:40.914 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:11:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:41.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:11:41 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337810784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:41 compute-0 nova_compute[262220]: 2025-10-08 10:11:41.404 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:11:41 compute-0 ceph-mon[73572]: pgmap v812: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:11:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/337810784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:41 compute-0 nova_compute[262220]: 2025-10-08 10:11:41.593 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:11:41 compute-0 nova_compute[262220]: 2025-10-08 10:11:41.594 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4587MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:11:41 compute-0 nova_compute[262220]: 2025-10-08 10:11:41.594 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:11:41 compute-0 nova_compute[262220]: 2025-10-08 10:11:41.594 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:11:41 compute-0 nova_compute[262220]: 2025-10-08 10:11:41.672 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:11:41 compute-0 nova_compute[262220]: 2025-10-08 10:11:41.673 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:11:41 compute-0 nova_compute[262220]: 2025-10-08 10:11:41.689 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:11:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:11:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:11:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987466161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:42 compute-0 nova_compute[262220]: 2025-10-08 10:11:42.161 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:11:42 compute-0 nova_compute[262220]: 2025-10-08 10:11:42.166 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:11:42 compute-0 nova_compute[262220]: 2025-10-08 10:11:42.191 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:11:42 compute-0 nova_compute[262220]: 2025-10-08 10:11:42.212 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:11:42 compute-0 nova_compute[262220]: 2025-10-08 10:11:42.213 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:11:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/987466161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:42.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:42 compute-0 nova_compute[262220]: 2025-10-08 10:11:42.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:42 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 08 10:11:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:43.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:43 compute-0 nova_compute[262220]: 2025-10-08 10:11:43.214 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:11:43 compute-0 nova_compute[262220]: 2025-10-08 10:11:43.214 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:11:43 compute-0 nova_compute[262220]: 2025-10-08 10:11:43.215 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:11:43 compute-0 ceph-mon[73572]: pgmap v813: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:11:43 compute-0 nova_compute[262220]: 2025-10-08 10:11:43.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:11:43 compute-0 nova_compute[262220]: 2025-10-08 10:11:43.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:11:43 compute-0 nova_compute[262220]: 2025-10-08 10:11:43.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:11:43 compute-0 nova_compute[262220]: 2025-10-08 10:11:43.906 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:11:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 08 10:11:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2702994280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3243878718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:44 compute-0 nova_compute[262220]: 2025-10-08 10:11:44.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:44 compute-0 podman[270087]: 2025-10-08 10:11:44.903300462 +0000 UTC m=+0.059741804 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:11:44 compute-0 podman[270088]: 2025-10-08 10:11:44.916607469 +0000 UTC m=+0.063306580 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 10:11:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:45.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:45 compute-0 ceph-mon[73572]: pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 08 10:11:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2908615475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4017782709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1646762580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:11:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:45] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Oct 08 10:11:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:45] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Oct 08 10:11:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:11:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:46 compute-0 ceph-mon[73572]: pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:11:46 compute-0 sudo[270129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:11:46 compute-0 sudo[270129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:11:46 compute-0 sudo[270129]: pam_unix(sudo:session): session closed for user root
Oct 08 10:11:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:47.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:11:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:47.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:11:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:47.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:11:47
Oct 08 10:11:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:11:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:11:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'volumes', '.mgr', '.nfs', 'vms', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Oct 08 10:11:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:11:47 compute-0 nova_compute[262220]: 2025-10-08 10:11:47.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:11:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:11:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:11:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:11:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:11:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:11:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:11:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:48 compute-0 ceph-mon[73572]: pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:11:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:49.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:49 compute-0 nova_compute[262220]: 2025-10-08 10:11:49.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:11:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:50.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:51 compute-0 ceph-mon[73572]: pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:11:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:51.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:11:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:52.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:52 compute-0 nova_compute[262220]: 2025-10-08 10:11:52.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:53 compute-0 ceph-mon[73572]: pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:11:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:53.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:11:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:54.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:54 compute-0 nova_compute[262220]: 2025-10-08 10:11:54.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:55.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:55 compute-0 ceph-mon[73572]: pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:11:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:55] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Oct 08 10:11:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:55] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Oct 08 10:11:55 compute-0 podman[270164]: 2025-10-08 10:11:55.91141358 +0000 UTC m=+0.073975952 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 08 10:11:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 08 10:11:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:57.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:11:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:57.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:11:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:57.134Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:11:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:57.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:57 compute-0 ceph-mon[73572]: pgmap v820: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 08 10:11:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:11:57.409 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:11:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:11:57.410 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:11:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:11:57.410 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:11:57 compute-0 nova_compute[262220]: 2025-10-08 10:11:57.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:11:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 08 10:11:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:11:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:58.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:11:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900047b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:11:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:11:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:11:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:11:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:11:59 compute-0 ceph-mon[73572]: pgmap v821: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 08 10:11:59 compute-0 nova_compute[262220]: 2025-10-08 10:11:59.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct 08 10:12:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:00 compute-0 sudo[270189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:12:00 compute-0 sudo[270189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:00 compute-0 sudo[270189]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:00.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:00 compute-0 sudo[270214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 10:12:00 compute-0 sudo[270214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:01 compute-0 podman[270311]: 2025-10-08 10:12:01.08127557 +0000 UTC m=+0.060937743 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Oct 08 10:12:01 compute-0 podman[270311]: 2025-10-08 10:12:01.169206099 +0000 UTC m=+0.148868272 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:12:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:01.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:01 compute-0 ceph-mon[73572]: pgmap v822: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct 08 10:12:01 compute-0 podman[270446]: 2025-10-08 10:12:01.655158522 +0000 UTC m=+0.055801234 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:12:01 compute-0 podman[270446]: 2025-10-08 10:12:01.661730398 +0000 UTC m=+0.062373110 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:12:01 compute-0 podman[270519]: 2025-10-08 10:12:01.921567894 +0000 UTC m=+0.045826546 container exec ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:12:01 compute-0 podman[270519]: 2025-10-08 10:12:01.9333301 +0000 UTC m=+0.057588742 container exec_died ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 10:12:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 13 KiB/s wr, 0 op/s
Oct 08 10:12:02 compute-0 podman[270590]: 2025-10-08 10:12:02.1260017 +0000 UTC m=+0.052339021 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 10:12:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:02 compute-0 podman[270590]: 2025-10-08 10:12:02.140386572 +0000 UTC m=+0.066723853 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 10:12:02 compute-0 podman[270655]: 2025-10-08 10:12:02.382672652 +0000 UTC m=+0.052828977 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-type=git, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, name=keepalived, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793)
Oct 08 10:12:02 compute-0 podman[270655]: 2025-10-08 10:12:02.395342988 +0000 UTC m=+0.065499303 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, name=keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc.)
Oct 08 10:12:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:02.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:02 compute-0 podman[270721]: 2025-10-08 10:12:02.590761007 +0000 UTC m=+0.048559836 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:12:02 compute-0 podman[270721]: 2025-10-08 10:12:02.61886602 +0000 UTC m=+0.076664829 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:12:02 compute-0 podman[270795]: 2025-10-08 10:12:02.796514317 +0000 UTC m=+0.041804385 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 10:12:02 compute-0 nova_compute[262220]: 2025-10-08 10:12:02.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:12:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:12:02 compute-0 podman[270795]: 2025-10-08 10:12:02.989968272 +0000 UTC m=+0.235258330 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 10:12:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:03.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:03 compute-0 ceph-mon[73572]: pgmap v823: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 13 KiB/s wr, 0 op/s
Oct 08 10:12:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:12:03 compute-0 podman[270907]: 2025-10-08 10:12:03.339652409 +0000 UTC m=+0.066247038 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:12:03 compute-0 podman[270907]: 2025-10-08 10:12:03.371353529 +0000 UTC m=+0.097948138 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:12:03 compute-0 sudo[270214]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:12:03 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:12:03 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:03 compute-0 sudo[270951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:12:03 compute-0 sudo[270951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:03 compute-0 sudo[270951]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:03 compute-0 sudo[270976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:12:03 compute-0 sudo[270976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:04 compute-0 sudo[270976]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:12:04 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:12:04 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:12:04 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:12:04 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:12:04 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:12:04 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:12:04 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 14 KiB/s wr, 0 op/s
Oct 08 10:12:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:04 compute-0 sudo[271034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:12:04 compute-0 sudo[271034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:04 compute-0 sudo[271034]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:04 compute-0 sudo[271059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:12:04 compute-0 sudo[271059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:12:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:04 compute-0 podman[271127]: 2025-10-08 10:12:04.555479048 +0000 UTC m=+0.033985326 container create 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:12:04 compute-0 systemd[1]: Started libpod-conmon-31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501.scope.
Oct 08 10:12:04 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:12:04 compute-0 podman[271127]: 2025-10-08 10:12:04.634185214 +0000 UTC m=+0.112691512 container init 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:12:04 compute-0 podman[271127]: 2025-10-08 10:12:04.540521088 +0000 UTC m=+0.019027386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:12:04 compute-0 podman[271127]: 2025-10-08 10:12:04.642180726 +0000 UTC m=+0.120687004 container start 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:12:04 compute-0 xenodochial_galileo[271143]: 167 167
Oct 08 10:12:04 compute-0 systemd[1]: libpod-31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501.scope: Deactivated successfully.
Oct 08 10:12:04 compute-0 podman[271127]: 2025-10-08 10:12:04.650732917 +0000 UTC m=+0.129239245 container attach 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:12:04 compute-0 podman[271127]: 2025-10-08 10:12:04.652463825 +0000 UTC m=+0.130970133 container died 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 10:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-47634be7e7d183265b59f1378c80519fc6df6accbed64e0480061fb1c5f03ed6-merged.mount: Deactivated successfully.
Oct 08 10:12:04 compute-0 podman[271127]: 2025-10-08 10:12:04.706734737 +0000 UTC m=+0.185241055 container remove 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:12:04 compute-0 systemd[1]: libpod-conmon-31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501.scope: Deactivated successfully.
Oct 08 10:12:04 compute-0 nova_compute[262220]: 2025-10-08 10:12:04.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:04 compute-0 podman[271169]: 2025-10-08 10:12:04.87759257 +0000 UTC m=+0.046628573 container create 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 10:12:04 compute-0 systemd[1]: Started libpod-conmon-1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9.scope.
Oct 08 10:12:04 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:04 compute-0 podman[271169]: 2025-10-08 10:12:04.857111078 +0000 UTC m=+0.026147101 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:12:04 compute-0 podman[271169]: 2025-10-08 10:12:04.965366034 +0000 UTC m=+0.134402057 container init 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 08 10:12:04 compute-0 podman[271169]: 2025-10-08 10:12:04.97258873 +0000 UTC m=+0.141624723 container start 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:12:04 compute-0 podman[271169]: 2025-10-08 10:12:04.976127497 +0000 UTC m=+0.145163530 container attach 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:12:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:05.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:05 compute-0 clever_herschel[271185]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:12:05 compute-0 clever_herschel[271185]: --> All data devices are unavailable
Oct 08 10:12:05 compute-0 systemd[1]: libpod-1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9.scope: Deactivated successfully.
Oct 08 10:12:05 compute-0 podman[271169]: 2025-10-08 10:12:05.290772924 +0000 UTC m=+0.459808937 container died 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Oct 08 10:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54-merged.mount: Deactivated successfully.
Oct 08 10:12:05 compute-0 podman[271169]: 2025-10-08 10:12:05.343284688 +0000 UTC m=+0.512320691 container remove 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:12:05 compute-0 systemd[1]: libpod-conmon-1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9.scope: Deactivated successfully.
Oct 08 10:12:05 compute-0 sudo[271059]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:05 compute-0 sudo[271214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:12:05 compute-0 sudo[271214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:05 compute-0 sudo[271214]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:05 compute-0 ceph-mon[73572]: pgmap v824: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 14 KiB/s wr, 0 op/s
Oct 08 10:12:05 compute-0 sudo[271239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:12:05 compute-0 sudo[271239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:12:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:12:05 compute-0 podman[271304]: 2025-10-08 10:12:05.906272923 +0000 UTC m=+0.047728839 container create 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:12:05 compute-0 systemd[1]: Started libpod-conmon-6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2.scope.
Oct 08 10:12:05 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:12:05 compute-0 podman[271304]: 2025-10-08 10:12:05.977354878 +0000 UTC m=+0.118810824 container init 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:12:05 compute-0 podman[271304]: 2025-10-08 10:12:05.887333811 +0000 UTC m=+0.028789777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:12:05 compute-0 podman[271304]: 2025-10-08 10:12:05.983676965 +0000 UTC m=+0.125132881 container start 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:12:05 compute-0 podman[271304]: 2025-10-08 10:12:05.987388747 +0000 UTC m=+0.128844693 container attach 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Oct 08 10:12:05 compute-0 focused_margulis[271322]: 167 167
Oct 08 10:12:05 compute-0 systemd[1]: libpod-6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2.scope: Deactivated successfully.
Oct 08 10:12:05 compute-0 conmon[271322]: conmon 6c7fba861b48e3590346 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2.scope/container/memory.events
Oct 08 10:12:05 compute-0 podman[271304]: 2025-10-08 10:12:05.991852154 +0000 UTC m=+0.133308180 container died 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecfe23675fb3b0d4a819316cdf7cfef4259950e57586acf0603e22ef267147fa-merged.mount: Deactivated successfully.
Oct 08 10:12:06 compute-0 podman[271304]: 2025-10-08 10:12:06.037358439 +0000 UTC m=+0.178814345 container remove 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:12:06 compute-0 systemd[1]: libpod-conmon-6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2.scope: Deactivated successfully.
Oct 08 10:12:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 08 10:12:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:06 compute-0 podman[271346]: 2025-10-08 10:12:06.203237968 +0000 UTC m=+0.044958998 container create 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 10:12:06 compute-0 systemd[1]: Started libpod-conmon-3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b.scope.
Oct 08 10:12:06 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:06 compute-0 podman[271346]: 2025-10-08 10:12:06.263089334 +0000 UTC m=+0.104810394 container init 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 10:12:06 compute-0 podman[271346]: 2025-10-08 10:12:06.271070497 +0000 UTC m=+0.112791527 container start 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 10:12:06 compute-0 podman[271346]: 2025-10-08 10:12:06.27514347 +0000 UTC m=+0.116864520 container attach 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 10:12:06 compute-0 podman[271346]: 2025-10-08 10:12:06.183268132 +0000 UTC m=+0.024989182 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:12:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:06 compute-0 inspiring_euler[271363]: {
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:     "1": [
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:         {
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "devices": [
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "/dev/loop3"
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             ],
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "lv_name": "ceph_lv0",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "lv_size": "21470642176",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "name": "ceph_lv0",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "tags": {
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.cluster_name": "ceph",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.crush_device_class": "",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.encrypted": "0",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.osd_id": "1",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.type": "block",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.vdo": "0",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:                 "ceph.with_tpm": "0"
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             },
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "type": "block",
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:             "vg_name": "ceph_vg0"
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:         }
Oct 08 10:12:06 compute-0 inspiring_euler[271363]:     ]
Oct 08 10:12:06 compute-0 inspiring_euler[271363]: }
Oct 08 10:12:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:06.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:06 compute-0 systemd[1]: libpod-3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b.scope: Deactivated successfully.
Oct 08 10:12:06 compute-0 podman[271346]: 2025-10-08 10:12:06.56524065 +0000 UTC m=+0.406961700 container died 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 10:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1-merged.mount: Deactivated successfully.
Oct 08 10:12:06 compute-0 podman[271346]: 2025-10-08 10:12:06.614282661 +0000 UTC m=+0.456003691 container remove 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:12:06 compute-0 systemd[1]: libpod-conmon-3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b.scope: Deactivated successfully.
Oct 08 10:12:06 compute-0 sudo[271239]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:06 compute-0 sudo[271386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:12:06 compute-0 sudo[271386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:06 compute-0 sudo[271386]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:06 compute-0 sudo[271411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:12:06 compute-0 sudo[271411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:07 compute-0 sudo[271462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:12:07 compute-0 sudo[271462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:07 compute-0 sudo[271462]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:07.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:12:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:07.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:12:07 compute-0 podman[271502]: 2025-10-08 10:12:07.158587072 +0000 UTC m=+0.053004362 container create 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 10:12:07 compute-0 systemd[1]: Started libpod-conmon-63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0.scope.
Oct 08 10:12:07 compute-0 podman[271502]: 2025-10-08 10:12:07.12960913 +0000 UTC m=+0.024026450 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:12:07 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:12:07 compute-0 podman[271502]: 2025-10-08 10:12:07.242740107 +0000 UTC m=+0.137157457 container init 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:12:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:07.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:07 compute-0 podman[271502]: 2025-10-08 10:12:07.252172476 +0000 UTC m=+0.146589756 container start 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:12:07 compute-0 podman[271502]: 2025-10-08 10:12:07.255743323 +0000 UTC m=+0.150160643 container attach 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Oct 08 10:12:07 compute-0 suspicious_brahmagupta[271520]: 167 167
Oct 08 10:12:07 compute-0 systemd[1]: libpod-63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0.scope: Deactivated successfully.
Oct 08 10:12:07 compute-0 podman[271502]: 2025-10-08 10:12:07.259879009 +0000 UTC m=+0.154296309 container died 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 10:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd24e48410452b4d1734a5625e4b3124d4804702bac6ae2a4e2b53f468ad9e35-merged.mount: Deactivated successfully.
Oct 08 10:12:07 compute-0 podman[271502]: 2025-10-08 10:12:07.306940865 +0000 UTC m=+0.201358155 container remove 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 08 10:12:07 compute-0 systemd[1]: libpod-conmon-63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0.scope: Deactivated successfully.
Oct 08 10:12:07 compute-0 ceph-mon[73572]: pgmap v825: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 08 10:12:07 compute-0 podman[271544]: 2025-10-08 10:12:07.54664693 +0000 UTC m=+0.064812421 container create e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 08 10:12:07 compute-0 systemd[1]: Started libpod-conmon-e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a.scope.
Oct 08 10:12:07 compute-0 podman[271544]: 2025-10-08 10:12:07.523069995 +0000 UTC m=+0.041235536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:12:07 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:12:07 compute-0 podman[271544]: 2025-10-08 10:12:07.638799707 +0000 UTC m=+0.156965218 container init e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:12:07 compute-0 podman[271544]: 2025-10-08 10:12:07.646318783 +0000 UTC m=+0.164484274 container start e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:12:07 compute-0 podman[271544]: 2025-10-08 10:12:07.649473937 +0000 UTC m=+0.167639428 container attach e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:12:07 compute-0 nova_compute[262220]: 2025-10-08 10:12:07.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 08 10:12:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:08 compute-0 lvm[271649]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:12:08 compute-0 lvm[271649]: VG ceph_vg0 finished
Oct 08 10:12:08 compute-0 objective_payne[271561]: {}
Oct 08 10:12:08 compute-0 systemd[1]: libpod-e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a.scope: Deactivated successfully.
Oct 08 10:12:08 compute-0 systemd[1]: libpod-e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a.scope: Consumed 1.280s CPU time.
Oct 08 10:12:08 compute-0 podman[271635]: 2025-10-08 10:12:08.425251922 +0000 UTC m=+0.100364178 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:12:08 compute-0 podman[271666]: 2025-10-08 10:12:08.46446477 +0000 UTC m=+0.024180665 container died e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Oct 08 10:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd-merged.mount: Deactivated successfully.
Oct 08 10:12:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:08 compute-0 podman[271666]: 2025-10-08 10:12:08.518098622 +0000 UTC m=+0.077814487 container remove e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 08 10:12:08 compute-0 systemd[1]: libpod-conmon-e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a.scope: Deactivated successfully.
Oct 08 10:12:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:08 compute-0 sudo[271411]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:12:08 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:08 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:12:08 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:08 compute-0 sudo[271681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:12:08 compute-0 sudo[271681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:08 compute-0 sudo[271681]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:09.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:09 compute-0 ceph-mon[73572]: pgmap v826: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 08 10:12:09 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:09 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:12:09 compute-0 nova_compute[262220]: 2025-10-08 10:12:09.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 7.0 KiB/s wr, 1 op/s
Oct 08 10:12:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:10 compute-0 ceph-mon[73572]: pgmap v827: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 7.0 KiB/s wr, 1 op/s
Oct 08 10:12:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:10.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 5.0 KiB/s wr, 1 op/s
Oct 08 10:12:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:12 compute-0 nova_compute[262220]: 2025-10-08 10:12:12.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:13 compute-0 ceph-mon[73572]: pgmap v828: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 5.0 KiB/s wr, 1 op/s
Oct 08 10:12:13 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2818842747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:13.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.2 KiB/s wr, 29 op/s
Oct 08 10:12:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:14.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:14 compute-0 nova_compute[262220]: 2025-10-08 10:12:14.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:15 compute-0 ceph-mon[73572]: pgmap v829: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.2 KiB/s wr, 29 op/s
Oct 08 10:12:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:15.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:15] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:12:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:15] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:12:15 compute-0 podman[271717]: 2025-10-08 10:12:15.900106881 +0000 UTC m=+0.055205734 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 08 10:12:15 compute-0 podman[271716]: 2025-10-08 10:12:15.90889313 +0000 UTC m=+0.066910539 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:12:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct 08 10:12:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0008f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:16.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:17.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:12:17 compute-0 ceph-mon[73572]: pgmap v830: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct 08 10:12:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:17.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:17 compute-0 nova_compute[262220]: 2025-10-08 10:12:17.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:12:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:12:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:12:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:12:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct 08 10:12:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:12:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:12:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:12:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:12:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:12:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0008f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:19.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:19 compute-0 ceph-mon[73572]: pgmap v831: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct 08 10:12:19 compute-0 nova_compute[262220]: 2025-10-08 10:12:19.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 08 10:12:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:20.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:21.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:21 compute-0 ceph-mon[73572]: pgmap v832: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 08 10:12:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1435371414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:12:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1435371414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:12:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:12:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:22.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:22 compute-0 nova_compute[262220]: 2025-10-08 10:12:22.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:23.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:23 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:12:23.301 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:12:23 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:12:23.302 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:12:23 compute-0 ceph-mon[73572]: pgmap v833: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:12:23 compute-0 nova_compute[262220]: 2025-10-08 10:12:23.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:12:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:24.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:24 compute-0 nova_compute[262220]: 2025-10-08 10:12:24.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:12:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:25.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:12:25 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:12:25.304 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:12:25 compute-0 ceph-mon[73572]: pgmap v834: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:12:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:25] "GET /metrics HTTP/1.1" 200 48445 "" "Prometheus/2.51.0"
Oct 08 10:12:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:25] "GET /metrics HTTP/1.1" 200 48445 "" "Prometheus/2.51.0"
Oct 08 10:12:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0008f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:26.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:26 compute-0 podman[271765]: 2025-10-08 10:12:26.902096719 +0000 UTC m=+0.063048381 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 08 10:12:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:27.137Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:12:27 compute-0 sudo[271785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:12:27 compute-0 sudo[271785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:27 compute-0 sudo[271785]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:27.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:27 compute-0 ceph-mon[73572]: pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:27 compute-0 nova_compute[262220]: 2025-10-08 10:12:27.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:28.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0008f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:29.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:29 compute-0 ceph-mon[73572]: pgmap v836: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:29 compute-0 nova_compute[262220]: 2025-10-08 10:12:29.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:12:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:30.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:31 compute-0 ceph-mon[73572]: pgmap v837: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:12:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:32.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:32 compute-0 nova_compute[262220]: 2025-10-08 10:12:32.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:12:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:12:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:33 compute-0 ceph-mon[73572]: pgmap v838: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:12:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:34.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:34 compute-0 nova_compute[262220]: 2025-10-08 10:12:34.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:35.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:35 compute-0 ceph-mon[73572]: pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:35] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 08 10:12:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:35] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 08 10:12:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:36.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:37.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:12:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:37.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:37 compute-0 ceph-mon[73572]: pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:37 compute-0 nova_compute[262220]: 2025-10-08 10:12:37.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:38 compute-0 podman[271822]: 2025-10-08 10:12:38.967383015 +0000 UTC m=+0.120917853 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 08 10:12:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:39.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:39 compute-0 ceph-mon[73572]: pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:39 compute-0 nova_compute[262220]: 2025-10-08 10:12:39.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:12:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:40 compute-0 ceph-mon[73572]: pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 08 10:12:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:40.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:40 compute-0 nova_compute[262220]: 2025-10-08 10:12:40.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:40 compute-0 nova_compute[262220]: 2025-10-08 10:12:40.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:41.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1609142708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:41 compute-0 nova_compute[262220]: 2025-10-08 10:12:41.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:42 compute-0 ceph-mon[73572]: pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 08 10:12:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.897 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.897 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.897 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.924 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.926 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:12:42 compute-0 nova_compute[262220]: 2025-10-08 10:12:42.927 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:12:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:12:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:43.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:12:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:12:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505331608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:43 compute-0 nova_compute[262220]: 2025-10-08 10:12:43.409 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:12:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2505331608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:43 compute-0 nova_compute[262220]: 2025-10-08 10:12:43.593 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:12:43 compute-0 nova_compute[262220]: 2025-10-08 10:12:43.595 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4600MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:12:43 compute-0 nova_compute[262220]: 2025-10-08 10:12:43.595 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:12:43 compute-0 nova_compute[262220]: 2025-10-08 10:12:43.595 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:12:43 compute-0 nova_compute[262220]: 2025-10-08 10:12:43.649 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:12:43 compute-0 nova_compute[262220]: 2025-10-08 10:12:43.650 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:12:43 compute-0 nova_compute[262220]: 2025-10-08 10:12:43.672 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:12:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:12:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856953484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:12:44 compute-0 nova_compute[262220]: 2025-10-08 10:12:44.164 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:12:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:44 compute-0 nova_compute[262220]: 2025-10-08 10:12:44.169 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:12:44 compute-0 nova_compute[262220]: 2025-10-08 10:12:44.185 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:12:44 compute-0 nova_compute[262220]: 2025-10-08 10:12:44.187 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:12:44 compute-0 nova_compute[262220]: 2025-10-08 10:12:44.188 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:12:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3856953484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:44 compute-0 ceph-mon[73572]: pgmap v844: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:12:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:44.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:44 compute-0 nova_compute[262220]: 2025-10-08 10:12:44.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:45 compute-0 nova_compute[262220]: 2025-10-08 10:12:45.178 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:45 compute-0 nova_compute[262220]: 2025-10-08 10:12:45.178 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:12:45 compute-0 nova_compute[262220]: 2025-10-08 10:12:45.178 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:12:45 compute-0 nova_compute[262220]: 2025-10-08 10:12:45.195 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:12:45 compute-0 nova_compute[262220]: 2025-10-08 10:12:45.196 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:45 compute-0 nova_compute[262220]: 2025-10-08 10:12:45.196 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:45 compute-0 nova_compute[262220]: 2025-10-08 10:12:45.196 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:12:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:45.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/843562555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:45] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 08 10:12:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:45] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 08 10:12:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:12:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:46.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1591240475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/271333014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:46 compute-0 ceph-mon[73572]: pgmap v845: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:12:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1090040394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:12:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/667254830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:12:46 compute-0 podman[271904]: 2025-10-08 10:12:46.681283738 +0000 UTC m=+0.060990384 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:12:46 compute-0 podman[271905]: 2025-10-08 10:12:46.700995426 +0000 UTC m=+0.065812042 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:12:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:47.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:12:47 compute-0 sudo[271946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:12:47 compute-0 sudo[271946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:12:47 compute-0 sudo[271946]: pam_unix(sudo:session): session closed for user root
Oct 08 10:12:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:47.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3914761667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:12:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:12:47
Oct 08 10:12:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:12:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:12:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.nfs', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Oct 08 10:12:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:12:47 compute-0 nova_compute[262220]: 2025-10-08 10:12:47.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:12:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:12:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:12:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:12:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:12:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:12:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:48.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:12:48 compute-0 ceph-mon[73572]: pgmap v846: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:12:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:49.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:49 compute-0 nova_compute[262220]: 2025-10-08 10:12:49.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:12:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:50.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:51 compute-0 ceph-mon[73572]: pgmap v847: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:12:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:51.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:12:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:52.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:52 compute-0 nova_compute[262220]: 2025-10-08 10:12:52.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:53 compute-0 ceph-mon[73572]: pgmap v848: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:12:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:53.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 08 10:12:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:55 compute-0 nova_compute[262220]: 2025-10-08 10:12:54.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:55 compute-0 ceph-mon[73572]: pgmap v849: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 08 10:12:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:55.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:12:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:55] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Oct 08 10:12:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:55] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Oct 08 10:12:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:12:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:56.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:57.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:12:57 compute-0 ceph-mon[73572]: pgmap v850: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:12:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:57.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:12:57.410 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:12:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:12:57.411 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:12:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:12:57.411 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:12:57 compute-0 nova_compute[262220]: 2025-10-08 10:12:57.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:12:57 compute-0 podman[271981]: 2025-10-08 10:12:57.932124792 +0000 UTC m=+0.079684518 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 08 10:12:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:12:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:12:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:58.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:12:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:12:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:12:59 compute-0 ceph-mon[73572]: pgmap v851: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 08 10:12:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:12:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:12:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:00 compute-0 nova_compute[262220]: 2025-10-08 10:13:00.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:13:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:01 compute-0 ceph-mon[73572]: pgmap v852: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:13:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:01.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 08 10:13:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:02.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:02 compute-0 nova_compute[262220]: 2025-10-08 10:13:02.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:13:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:13:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:13:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5782 writes, 25K keys, 5782 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 5782 writes, 5782 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1544 writes, 6558 keys, 1544 commit groups, 1.0 writes per commit group, ingest: 11.14 MB, 0.02 MB/s
                                           Interval WAL: 1544 writes, 1544 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     95.0      0.42              0.10        14    0.030       0      0       0.0       0.0
                                             L6      1/0   11.81 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.1    136.2    116.4      1.41              0.36        13    0.109     67K   6910       0.0       0.0
                                            Sum      1/0   11.81 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   5.1    105.1    111.5      1.83              0.46        27    0.068     67K   6910       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3     94.5     93.3      0.77              0.15        10    0.077     29K   2558       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    136.2    116.4      1.41              0.36        13    0.109     67K   6910       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.7      0.42              0.10        13    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.039, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 1.8 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 304.00 MB usage: 15.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00011 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(819,14.60 MB,4.80332%) FilterBlock(28,201.17 KB,0.064624%) IndexBlock(28,359.95 KB,0.115631%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 08 10:13:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:03.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:03 compute-0 ceph-mon[73572]: pgmap v853: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 08 10:13:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:13:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 08 10:13:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:04.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:05 compute-0 nova_compute[262220]: 2025-10-08 10:13:05.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:05.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:05 compute-0 ceph-mon[73572]: pgmap v854: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 08 10:13:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:05] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Oct 08 10:13:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:05] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Oct 08 10:13:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:13:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:06.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:07.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:13:07 compute-0 sudo[272012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:13:07 compute-0 sudo[272012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:07 compute-0 sudo[272012]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:07.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:07 compute-0 ceph-mon[73572]: pgmap v855: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:13:07 compute-0 nova_compute[262220]: 2025-10-08 10:13:07.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:13:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000066s ======
Oct 08 10:13:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:08.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000066s
Oct 08 10:13:08 compute-0 sudo[272038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:13:08 compute-0 sudo[272038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:08 compute-0 sudo[272038]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:09 compute-0 sudo[272063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:13:09 compute-0 sudo[272063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:09 compute-0 podman[272087]: 2025-10-08 10:13:09.173853396 +0000 UTC m=+0.144319313 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:13:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:09.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:09 compute-0 ceph-mon[73572]: pgmap v856: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:13:09 compute-0 sudo[272063]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:13:09 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:13:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:13:09 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:13:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:13:09 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:13:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:13:09 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:13:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:13:09 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:13:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:13:09 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:13:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:13:09 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:13:09 compute-0 sudo[272145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:13:09 compute-0 sudo[272145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:09 compute-0 sudo[272145]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:09 compute-0 sudo[272170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:13:09 compute-0 sudo[272170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:10 compute-0 nova_compute[262220]: 2025-10-08 10:13:10.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:13:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:10 compute-0 podman[272237]: 2025-10-08 10:13:10.273911673 +0000 UTC m=+0.055025418 container create a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 08 10:13:10 compute-0 systemd[1]: Started libpod-conmon-a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600.scope.
Oct 08 10:13:10 compute-0 podman[272237]: 2025-10-08 10:13:10.249803711 +0000 UTC m=+0.030917476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:13:10 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:13:10 compute-0 podman[272237]: 2025-10-08 10:13:10.374862489 +0000 UTC m=+0.155976234 container init a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:13:10 compute-0 podman[272237]: 2025-10-08 10:13:10.385991695 +0000 UTC m=+0.167105420 container start a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 10:13:10 compute-0 podman[272237]: 2025-10-08 10:13:10.389116328 +0000 UTC m=+0.170230053 container attach a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:13:10 compute-0 dreamy_euler[272254]: 167 167
Oct 08 10:13:10 compute-0 systemd[1]: libpod-a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600.scope: Deactivated successfully.
Oct 08 10:13:10 compute-0 podman[272237]: 2025-10-08 10:13:10.394349999 +0000 UTC m=+0.175463724 container died a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:13:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0388ff08ff862d5c57b6b83b0c9817809cfaf42f6172fe87b51a71c62f253057-merged.mount: Deactivated successfully.
Oct 08 10:13:10 compute-0 podman[272237]: 2025-10-08 10:13:10.4375869 +0000 UTC m=+0.218700625 container remove a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 10:13:10 compute-0 systemd[1]: libpod-conmon-a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600.scope: Deactivated successfully.
Oct 08 10:13:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:13:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:13:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:13:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:13:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:13:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:13:10 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:13:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:10.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:10 compute-0 podman[272279]: 2025-10-08 10:13:10.630978842 +0000 UTC m=+0.050719256 container create 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:13:10 compute-0 systemd[1]: Started libpod-conmon-4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355.scope.
Oct 08 10:13:10 compute-0 podman[272279]: 2025-10-08 10:13:10.60867998 +0000 UTC m=+0.028420444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:13:10 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:10 compute-0 podman[272279]: 2025-10-08 10:13:10.736364885 +0000 UTC m=+0.156105319 container init 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:13:10 compute-0 podman[272279]: 2025-10-08 10:13:10.744360007 +0000 UTC m=+0.164100421 container start 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 10:13:10 compute-0 podman[272279]: 2025-10-08 10:13:10.747910754 +0000 UTC m=+0.167651188 container attach 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:13:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:11 compute-0 interesting_babbage[272295]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:13:11 compute-0 interesting_babbage[272295]: --> All data devices are unavailable
Oct 08 10:13:11 compute-0 systemd[1]: libpod-4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355.scope: Deactivated successfully.
Oct 08 10:13:11 compute-0 podman[272279]: 2025-10-08 10:13:11.158190042 +0000 UTC m=+0.577930476 container died 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 10:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7-merged.mount: Deactivated successfully.
Oct 08 10:13:11 compute-0 podman[272279]: 2025-10-08 10:13:11.21506112 +0000 UTC m=+0.634801534 container remove 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:13:11 compute-0 systemd[1]: libpod-conmon-4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355.scope: Deactivated successfully.
Oct 08 10:13:11 compute-0 sudo[272170]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:11 compute-0 sudo[272323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:13:11 compute-0 sudo[272323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:11 compute-0 sudo[272323]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:11.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:11 compute-0 sudo[272348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:13:11 compute-0 sudo[272348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:11 compute-0 ceph-mon[73572]: pgmap v857: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:13:11 compute-0 podman[272414]: 2025-10-08 10:13:11.869577661 +0000 UTC m=+0.048237806 container create 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:13:11 compute-0 systemd[1]: Started libpod-conmon-1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70.scope.
Oct 08 10:13:11 compute-0 podman[272414]: 2025-10-08 10:13:11.846367298 +0000 UTC m=+0.025027433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:13:11 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:13:11 compute-0 podman[272414]: 2025-10-08 10:13:11.966002859 +0000 UTC m=+0.144663004 container init 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 10:13:11 compute-0 podman[272414]: 2025-10-08 10:13:11.97578377 +0000 UTC m=+0.154443925 container start 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 10:13:11 compute-0 nervous_albattani[272431]: 167 167
Oct 08 10:13:11 compute-0 systemd[1]: libpod-1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70.scope: Deactivated successfully.
Oct 08 10:13:11 compute-0 podman[272414]: 2025-10-08 10:13:11.985616814 +0000 UTC m=+0.164276979 container attach 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:13:11 compute-0 podman[272414]: 2025-10-08 10:13:11.986942807 +0000 UTC m=+0.165602922 container died 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f85fa1d4c027b5c982f624f8c7cb2c6b8c8cf749606fb31886978e54a5f29bf0-merged.mount: Deactivated successfully.
Oct 08 10:13:12 compute-0 podman[272414]: 2025-10-08 10:13:12.025427291 +0000 UTC m=+0.204087406 container remove 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 08 10:13:12 compute-0 systemd[1]: libpod-conmon-1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70.scope: Deactivated successfully.
Oct 08 10:13:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:13:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:12 compute-0 podman[272458]: 2025-10-08 10:13:12.230892331 +0000 UTC m=+0.048804645 container create af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:13:12 compute-0 systemd[1]: Started libpod-conmon-af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58.scope.
Oct 08 10:13:12 compute-0 podman[272458]: 2025-10-08 10:13:12.211916367 +0000 UTC m=+0.029828681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:13:12 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:12 compute-0 podman[272458]: 2025-10-08 10:13:12.340639676 +0000 UTC m=+0.158552020 container init af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:13:12 compute-0 podman[272458]: 2025-10-08 10:13:12.352135694 +0000 UTC m=+0.170047988 container start af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:13:12 compute-0 podman[272458]: 2025-10-08 10:13:12.355679889 +0000 UTC m=+0.173592233 container attach af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Oct 08 10:13:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:12 compute-0 ceph-mon[73572]: pgmap v858: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:13:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:13:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:12.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:13:12 compute-0 kind_tu[272474]: {
Oct 08 10:13:12 compute-0 kind_tu[272474]:     "1": [
Oct 08 10:13:12 compute-0 kind_tu[272474]:         {
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "devices": [
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "/dev/loop3"
Oct 08 10:13:12 compute-0 kind_tu[272474]:             ],
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "lv_name": "ceph_lv0",
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "lv_size": "21470642176",
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "name": "ceph_lv0",
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "tags": {
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.cluster_name": "ceph",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.crush_device_class": "",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.encrypted": "0",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.osd_id": "1",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.type": "block",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.vdo": "0",
Oct 08 10:13:12 compute-0 kind_tu[272474]:                 "ceph.with_tpm": "0"
Oct 08 10:13:12 compute-0 kind_tu[272474]:             },
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "type": "block",
Oct 08 10:13:12 compute-0 kind_tu[272474]:             "vg_name": "ceph_vg0"
Oct 08 10:13:12 compute-0 kind_tu[272474]:         }
Oct 08 10:13:12 compute-0 kind_tu[272474]:     ]
Oct 08 10:13:12 compute-0 kind_tu[272474]: }
Oct 08 10:13:12 compute-0 systemd[1]: libpod-af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58.scope: Deactivated successfully.
Oct 08 10:13:12 compute-0 podman[272458]: 2025-10-08 10:13:12.710446334 +0000 UTC m=+0.528358658 container died af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 10:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d-merged.mount: Deactivated successfully.
Oct 08 10:13:12 compute-0 podman[272458]: 2025-10-08 10:13:12.765375798 +0000 UTC m=+0.583288092 container remove af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:13:12 compute-0 systemd[1]: libpod-conmon-af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58.scope: Deactivated successfully.
Oct 08 10:13:12 compute-0 sudo[272348]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:12 compute-0 nova_compute[262220]: 2025-10-08 10:13:12.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:12 compute-0 sudo[272496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:13:12 compute-0 sudo[272496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:12 compute-0 sudo[272496]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:12 compute-0 sudo[272521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:13:12 compute-0 sudo[272521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:13:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:13.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:13:13 compute-0 podman[272590]: 2025-10-08 10:13:13.477847084 +0000 UTC m=+0.043149539 container create db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 10:13:13 compute-0 systemd[1]: Started libpod-conmon-db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3.scope.
Oct 08 10:13:13 compute-0 podman[272590]: 2025-10-08 10:13:13.45946712 +0000 UTC m=+0.024769575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:13:13 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:13:13 compute-0 podman[272590]: 2025-10-08 10:13:13.578307484 +0000 UTC m=+0.143609939 container init db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 08 10:13:13 compute-0 podman[272590]: 2025-10-08 10:13:13.588777828 +0000 UTC m=+0.154080273 container start db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:13:13 compute-0 podman[272590]: 2025-10-08 10:13:13.593869695 +0000 UTC m=+0.159172180 container attach db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 10:13:13 compute-0 confident_taussig[272607]: 167 167
Oct 08 10:13:13 compute-0 systemd[1]: libpod-db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3.scope: Deactivated successfully.
Oct 08 10:13:13 compute-0 podman[272590]: 2025-10-08 10:13:13.596568533 +0000 UTC m=+0.161871018 container died db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Oct 08 10:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d96874b7e38e735b008dc94d49396f25a7709b540e0e835f2919c063c9663f0-merged.mount: Deactivated successfully.
Oct 08 10:13:13 compute-0 podman[272590]: 2025-10-08 10:13:13.647957212 +0000 UTC m=+0.213259697 container remove db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:13:13 compute-0 systemd[1]: libpod-conmon-db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3.scope: Deactivated successfully.
Oct 08 10:13:13 compute-0 podman[272630]: 2025-10-08 10:13:13.862676955 +0000 UTC m=+0.042971773 container create 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 10:13:13 compute-0 systemd[1]: Started libpod-conmon-83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390.scope.
Oct 08 10:13:13 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:13:13 compute-0 podman[272630]: 2025-10-08 10:13:13.846122301 +0000 UTC m=+0.026417119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:13:13 compute-0 podman[272630]: 2025-10-08 10:13:13.954787711 +0000 UTC m=+0.135082529 container init 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:13:13 compute-0 podman[272630]: 2025-10-08 10:13:13.96753186 +0000 UTC m=+0.147826658 container start 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Oct 08 10:13:13 compute-0 podman[272630]: 2025-10-08 10:13:13.971788569 +0000 UTC m=+0.152083387 container attach 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 10:13:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:13:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:14 compute-0 lvm[272722]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:13:14 compute-0 lvm[272722]: VG ceph_vg0 finished
Oct 08 10:13:14 compute-0 quirky_edison[272646]: {}
Oct 08 10:13:14 compute-0 systemd[1]: libpod-83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390.scope: Deactivated successfully.
Oct 08 10:13:14 compute-0 podman[272630]: 2025-10-08 10:13:14.763651432 +0000 UTC m=+0.943946260 container died 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 08 10:13:14 compute-0 systemd[1]: libpod-83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390.scope: Consumed 1.316s CPU time.
Oct 08 10:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c-merged.mount: Deactivated successfully.
Oct 08 10:13:14 compute-0 podman[272630]: 2025-10-08 10:13:14.804796804 +0000 UTC m=+0.985091602 container remove 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:13:14 compute-0 systemd[1]: libpod-conmon-83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390.scope: Deactivated successfully.
Oct 08 10:13:14 compute-0 sudo[272521]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:13:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:13:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:13:14 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:13:15 compute-0 nova_compute[262220]: 2025-10-08 10:13:15.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:15 compute-0 sudo[272738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:13:15 compute-0 sudo[272738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:15 compute-0 sudo[272738]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:15 compute-0 ceph-mon[73572]: pgmap v859: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 08 10:13:15 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:13:15 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:13:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:15.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:15] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Oct 08 10:13:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:15] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Oct 08 10:13:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 0 op/s
Oct 08 10:13:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:16.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:16 compute-0 podman[272765]: 2025-10-08 10:13:16.922668576 +0000 UTC m=+0.074524498 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 08 10:13:16 compute-0 podman[272766]: 2025-10-08 10:13:16.94621472 +0000 UTC m=+0.098616971 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:13:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:17.143Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:13:17 compute-0 ceph-mon[73572]: pgmap v860: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 0 op/s
Oct 08 10:13:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:17.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:13:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:13:17 compute-0 nova_compute[262220]: 2025-10-08 10:13:17.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:13:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:13:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:13:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:13:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:13:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:13:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 0 op/s
Oct 08 10:13:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:13:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:18.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:19 compute-0 ceph-mon[73572]: pgmap v861: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 0 op/s
Oct 08 10:13:19 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3524107414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:19.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:20 compute-0 nova_compute[262220]: 2025-10-08 10:13:20.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Oct 08 10:13:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:20.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900029c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:21 compute-0 ceph-mon[73572]: pgmap v862: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Oct 08 10:13:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2653787984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:13:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2653787984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:13:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 2.3 KiB/s wr, 0 op/s
Oct 08 10:13:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:22.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:22 compute-0 nova_compute[262220]: 2025-10-08 10:13:22.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:23.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:23 compute-0 ceph-mon[73572]: pgmap v863: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 2.3 KiB/s wr, 0 op/s
Oct 08 10:13:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:13:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900029c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:24 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2923868389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:13:24 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2730878526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:13:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:24.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:25 compute-0 nova_compute[262220]: 2025-10-08 10:13:25.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:25 compute-0 ceph-mon[73572]: pgmap v864: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:13:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:25] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Oct 08 10:13:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:25] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Oct 08 10:13:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:13:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:26 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:13:26.352 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:13:26 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:13:26.353 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:13:26 compute-0 nova_compute[262220]: 2025-10-08 10:13:26.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:26 compute-0 ceph-mon[73572]: pgmap v865: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:13:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900029c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:26.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:27.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:13:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:27.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:27 compute-0 sudo[272817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:13:27 compute-0 sudo[272817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:27 compute-0 sudo[272817]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:27 compute-0 nova_compute[262220]: 2025-10-08 10:13:27.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:13:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:28 compute-0 podman[272843]: 2025-10-08 10:13:28.904232924 +0000 UTC m=+0.068357666 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 10:13:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900029c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:29 compute-0 ceph-mon[73572]: pgmap v866: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:13:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:29.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:30 compute-0 nova_compute[262220]: 2025-10-08 10:13:30.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 08 10:13:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:30.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:31 compute-0 ceph-mon[73572]: pgmap v867: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 08 10:13:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:31.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 08 10:13:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:32.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:13:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:13:32 compute-0 nova_compute[262220]: 2025-10-08 10:13:32.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:33 compute-0 ceph-mon[73572]: pgmap v868: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 08 10:13:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:13:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:33.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 08 10:13:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:13:34.354 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:13:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:34.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:35 compute-0 nova_compute[262220]: 2025-10-08 10:13:35.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:35 compute-0 ceph-mon[73572]: pgmap v869: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 08 10:13:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:35.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:35] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct 08 10:13:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:35] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct 08 10:13:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 08 10:13:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:36.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:37.144Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:13:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:37.145Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:13:37 compute-0 ceph-mon[73572]: pgmap v870: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 08 10:13:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:37.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:37 compute-0 nova_compute[262220]: 2025-10-08 10:13:37.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 08 10:13:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:38.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:39 compute-0 ceph-mon[73572]: pgmap v871: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 08 10:13:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:39.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:39 compute-0 podman[272875]: 2025-10-08 10:13:39.971730395 +0000 UTC m=+0.125011888 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 10:13:40 compute-0 nova_compute[262220]: 2025-10-08 10:13:40.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 08 10:13:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:40.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:40 compute-0 nova_compute[262220]: 2025-10-08 10:13:40.901 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:13:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:41 compute-0 ceph-mon[73572]: pgmap v872: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 08 10:13:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:41.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:41 compute-0 nova_compute[262220]: 2025-10-08 10:13:41.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:13:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Oct 08 10:13:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:13:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:42.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:13:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 08 10:13:42 compute-0 nova_compute[262220]: 2025-10-08 10:13:42.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:13:42 compute-0 nova_compute[262220]: 2025-10-08 10:13:42.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:13:42 compute-0 nova_compute[262220]: 2025-10-08 10:13:42.917 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:13:42 compute-0 nova_compute[262220]: 2025-10-08 10:13:42.918 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:13:42 compute-0 nova_compute[262220]: 2025-10-08 10:13:42.918 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:13:42 compute-0 nova_compute[262220]: 2025-10-08 10:13:42.918 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:13:42 compute-0 nova_compute[262220]: 2025-10-08 10:13:42.918 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:13:42 compute-0 nova_compute[262220]: 2025-10-08 10:13:42.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:43 compute-0 ceph-mon[73572]: pgmap v873: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Oct 08 10:13:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:13:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629483059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:43 compute-0 nova_compute[262220]: 2025-10-08 10:13:43.450 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:13:43 compute-0 nova_compute[262220]: 2025-10-08 10:13:43.606 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:13:43 compute-0 nova_compute[262220]: 2025-10-08 10:13:43.607 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4579MB free_disk=59.8980827331543GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:13:43 compute-0 nova_compute[262220]: 2025-10-08 10:13:43.607 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:13:43 compute-0 nova_compute[262220]: 2025-10-08 10:13:43.607 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:13:43 compute-0 nova_compute[262220]: 2025-10-08 10:13:43.660 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:13:43 compute-0 nova_compute[262220]: 2025-10-08 10:13:43.661 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:13:43 compute-0 nova_compute[262220]: 2025-10-08 10:13:43.677 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:13:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:13:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760045672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:44 compute-0 nova_compute[262220]: 2025-10-08 10:13:44.137 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:13:44 compute-0 nova_compute[262220]: 2025-10-08 10:13:44.141 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:13:44 compute-0 nova_compute[262220]: 2025-10-08 10:13:44.156 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:13:44 compute-0 nova_compute[262220]: 2025-10-08 10:13:44.157 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:13:44 compute-0 nova_compute[262220]: 2025-10-08 10:13:44.158 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:13:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 08 10:13:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/629483059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1760045672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:45 compute-0 nova_compute[262220]: 2025-10-08 10:13:45.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:45.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:45 compute-0 ceph-mon[73572]: pgmap v874: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 08 10:13:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct 08 10:13:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct 08 10:13:46 compute-0 nova_compute[262220]: 2025-10-08 10:13:46.158 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:13:46 compute-0 nova_compute[262220]: 2025-10-08 10:13:46.159 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:13:46 compute-0 nova_compute[262220]: 2025-10-08 10:13:46.159 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:13:46 compute-0 nova_compute[262220]: 2025-10-08 10:13:46.172 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:13:46 compute-0 nova_compute[262220]: 2025-10-08 10:13:46.172 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:13:46 compute-0 nova_compute[262220]: 2025-10-08 10:13:46.173 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:13:46 compute-0 nova_compute[262220]: 2025-10-08 10:13:46.174 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:13:46 compute-0 nova_compute[262220]: 2025-10-08 10:13:46.174 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:13:46 compute-0 nova_compute[262220]: 2025-10-08 10:13:46.174 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:13:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 08 10:13:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3370075844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:46.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:47.146Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:13:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:47.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:13:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:47.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:47 compute-0 ceph-mon[73572]: pgmap v875: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 08 10:13:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/549649352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1482379896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:47 compute-0 sudo[272954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:13:47 compute-0 sudo[272954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:13:47 compute-0 sudo[272954]: pam_unix(sudo:session): session closed for user root
Oct 08 10:13:47 compute-0 podman[272978]: 2025-10-08 10:13:47.595057385 +0000 UTC m=+0.055715621 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:13:47 compute-0 podman[272979]: 2025-10-08 10:13:47.595617294 +0000 UTC m=+0.053336543 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 08 10:13:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:13:47
Oct 08 10:13:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:13:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:13:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['images', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.nfs', '.rgw.root', 'default.rgw.log', 'backups']
Oct 08 10:13:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:13:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:13:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:13:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:13:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:13:47 compute-0 nova_compute[262220]: 2025-10-08 10:13:47.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015163204279807253 of space, bias 1.0, pg target 0.4548961283942176 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 08 10:13:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:13:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:13:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1023776002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:13:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:48.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 08 10:13:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:49.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 08 10:13:49 compute-0 ceph-mon[73572]: pgmap v876: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 08 10:13:50 compute-0 nova_compute[262220]: 2025-10-08 10:13:50.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 144 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 08 10:13:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:51.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:51 compute-0 ceph-mon[73572]: pgmap v877: 353 pgs: 353 active+clean; 144 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 08 10:13:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/896626167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 144 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 107 KiB/s wr, 35 op/s
Oct 08 10:13:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:52 compute-0 ceph-mon[73572]: pgmap v878: 353 pgs: 353 active+clean; 144 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 107 KiB/s wr, 35 op/s
Oct 08 10:13:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:52 compute-0 nova_compute[262220]: 2025-10-08 10:13:52.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 108 KiB/s wr, 63 op/s
Oct 08 10:13:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:13:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:13:55 compute-0 nova_compute[262220]: 2025-10-08 10:13:55.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:55 compute-0 ceph-mon[73572]: pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 108 KiB/s wr, 63 op/s
Oct 08 10:13:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:55.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:55] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:13:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:55] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:13:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct 08 10:13:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101356 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:13:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:13:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:56.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:13:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:57.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:13:57 compute-0 ceph-mon[73572]: pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct 08 10:13:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/204039879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:13:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:13:57.411 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:13:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:13:57.412 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:13:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:13:57.413 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:13:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:57.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:57 compute-0 nova_compute[262220]: 2025-10-08 10:13:57.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:13:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct 08 10:13:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:13:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:58.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:13:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:13:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:13:59 compute-0 ceph-mon[73572]: pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct 08 10:13:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:13:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:13:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:59.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:13:59 compute-0 podman[273029]: 2025-10-08 10:13:59.900275672 +0000 UTC m=+0.059751096 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 10:14:00 compute-0 nova_compute[262220]: 2025-10-08 10:14:00.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 56 op/s
Oct 08 10:14:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:00.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:01 compute-0 ceph-mon[73572]: pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 56 op/s
Oct 08 10:14:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:01.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Oct 08 10:14:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:02.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:14:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:14:02 compute-0 nova_compute[262220]: 2025-10-08 10:14:02.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:03 compute-0 ceph-mon[73572]: pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Oct 08 10:14:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:14:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:03.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 40 op/s
Oct 08 10:14:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:14:05 compute-0 nova_compute[262220]: 2025-10-08 10:14:05.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:05 compute-0 ceph-mon[73572]: pgmap v884: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 40 op/s
Oct 08 10:14:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:05.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:14:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:14:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 682 B/s wr, 12 op/s
Oct 08 10:14:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:06.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:07 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:07.054 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:14:07 compute-0 nova_compute[262220]: 2025-10-08 10:14:07.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:07 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:07.055 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:14:07 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:07.056 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:14:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:07.147Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:14:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:07.148Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:14:07 compute-0 ceph-mon[73572]: pgmap v885: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 682 B/s wr, 12 op/s
Oct 08 10:14:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:07 compute-0 sudo[273057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:14:07 compute-0 sudo[273057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:07 compute-0 sudo[273057]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:14:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:14:07 compute-0 nova_compute[262220]: 2025-10-08 10:14:07.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 682 B/s wr, 12 op/s
Oct 08 10:14:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:08.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:09 compute-0 ceph-mon[73572]: pgmap v886: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 682 B/s wr, 12 op/s
Oct 08 10:14:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:09.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:10 compute-0 nova_compute[262220]: 2025-10-08 10:14:10.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1023 B/s wr, 13 op/s
Oct 08 10:14:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:10.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:10 compute-0 podman[273085]: 2025-10-08 10:14:10.979161017 +0000 UTC m=+0.138364057 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Oct 08 10:14:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:11 compute-0 ceph-mon[73572]: pgmap v887: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1023 B/s wr, 13 op/s
Oct 08 10:14:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:11.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:14:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:14:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:14:12 compute-0 nova_compute[262220]: 2025-10-08 10:14:12.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:13 compute-0 ceph-mon[73572]: pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:14:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:13.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:14:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:15 compute-0 nova_compute[262220]: 2025-10-08 10:14:15.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:15 compute-0 sudo[273117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:14:15 compute-0 sudo[273117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:15 compute-0 sudo[273117]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:15 compute-0 sudo[273142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 08 10:14:15 compute-0 sudo[273142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:15 compute-0 ceph-mon[73572]: pgmap v889: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Oct 08 10:14:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:15.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:15] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:14:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:15] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:14:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 10:14:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 10:14:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:15 compute-0 sudo[273142]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:14:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:15 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:14:15 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:15 compute-0 sudo[273188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:14:15 compute-0 sudo[273188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:15 compute-0 sudo[273188]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:15 compute-0 sudo[273213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:14:15 compute-0 sudo[273213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 08 10:14:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:16 compute-0 sudo[273213]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:14:16 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:14:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:14:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:14:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:14:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:14:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:14:16 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:14:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:14:16 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:14:16 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:14:16 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:14:16 compute-0 sudo[273268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:14:16 compute-0 sudo[273268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:16 compute-0 sudo[273268]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:16 compute-0 sudo[273293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:14:16 compute-0 sudo[273293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:16.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:16 compute-0 ceph-mon[73572]: pgmap v890: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:14:16 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:14:17 compute-0 podman[273359]: 2025-10-08 10:14:17.135476817 +0000 UTC m=+0.048639387 container create 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 10:14:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:17.149Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:14:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:17.149Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:14:17 compute-0 systemd[1]: Started libpod-conmon-010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720.scope.
Oct 08 10:14:17 compute-0 podman[273359]: 2025-10-08 10:14:17.111706282 +0000 UTC m=+0.024868872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:14:17 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:14:17 compute-0 podman[273359]: 2025-10-08 10:14:17.229788796 +0000 UTC m=+0.142951386 container init 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:14:17 compute-0 podman[273359]: 2025-10-08 10:14:17.241992449 +0000 UTC m=+0.155155019 container start 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 10:14:17 compute-0 podman[273359]: 2025-10-08 10:14:17.247154625 +0000 UTC m=+0.160317195 container attach 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:14:17 compute-0 cranky_engelbart[273377]: 167 167
Oct 08 10:14:17 compute-0 systemd[1]: libpod-010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720.scope: Deactivated successfully.
Oct 08 10:14:17 compute-0 podman[273359]: 2025-10-08 10:14:17.249391088 +0000 UTC m=+0.162553658 container died 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f21f1b8a8cdf6c1fff0dc6f5deca7c4031f2458fe4583634579e424ed965c287-merged.mount: Deactivated successfully.
Oct 08 10:14:17 compute-0 podman[273359]: 2025-10-08 10:14:17.294531521 +0000 UTC m=+0.207694111 container remove 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:14:17 compute-0 systemd[1]: libpod-conmon-010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720.scope: Deactivated successfully.
Oct 08 10:14:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:17.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:17 compute-0 podman[273403]: 2025-10-08 10:14:17.47642666 +0000 UTC m=+0.040986941 container create 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 08 10:14:17 compute-0 systemd[1]: Started libpod-conmon-6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4.scope.
Oct 08 10:14:17 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:17 compute-0 podman[273403]: 2025-10-08 10:14:17.45808752 +0000 UTC m=+0.022647821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:17 compute-0 podman[273403]: 2025-10-08 10:14:17.567276958 +0000 UTC m=+0.131837239 container init 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 08 10:14:17 compute-0 podman[273403]: 2025-10-08 10:14:17.577893689 +0000 UTC m=+0.142453970 container start 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 10:14:17 compute-0 podman[273403]: 2025-10-08 10:14:17.581602769 +0000 UTC m=+0.146163070 container attach 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 08 10:14:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:14:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:14:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:14:17 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:14:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:14:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:14:17 compute-0 podman[273432]: 2025-10-08 10:14:17.926633505 +0000 UTC m=+0.069675926 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 10:14:17 compute-0 charming_dijkstra[273420]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:14:17 compute-0 charming_dijkstra[273420]: --> All data devices are unavailable
Oct 08 10:14:17 compute-0 podman[273431]: 2025-10-08 10:14:17.932427932 +0000 UTC m=+0.075287868 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 08 10:14:17 compute-0 nova_compute[262220]: 2025-10-08 10:14:17.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:17 compute-0 systemd[1]: libpod-6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4.scope: Deactivated successfully.
Oct 08 10:14:17 compute-0 podman[273403]: 2025-10-08 10:14:17.970420775 +0000 UTC m=+0.534981066 container died 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 10:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11-merged.mount: Deactivated successfully.
Oct 08 10:14:18 compute-0 podman[273403]: 2025-10-08 10:14:18.030261913 +0000 UTC m=+0.594822194 container remove 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:14:18 compute-0 systemd[1]: libpod-conmon-6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4.scope: Deactivated successfully.
Oct 08 10:14:18 compute-0 sudo[273293]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:14:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:14:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:14:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:14:18 compute-0 sudo[273484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:14:18 compute-0 sudo[273484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:18 compute-0 sudo[273484]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 08 10:14:18 compute-0 sudo[273509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:14:18 compute-0 sudo[273509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:18 compute-0 podman[273577]: 2025-10-08 10:14:18.681678778 +0000 UTC m=+0.055051664 container create 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 10:14:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:18.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:18 compute-0 systemd[1]: Started libpod-conmon-98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f.scope.
Oct 08 10:14:18 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:14:18 compute-0 podman[273577]: 2025-10-08 10:14:18.652763426 +0000 UTC m=+0.026136332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:14:18 compute-0 podman[273577]: 2025-10-08 10:14:18.766234102 +0000 UTC m=+0.139607008 container init 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 10:14:18 compute-0 podman[273577]: 2025-10-08 10:14:18.776367008 +0000 UTC m=+0.149739914 container start 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 10:14:18 compute-0 podman[273577]: 2025-10-08 10:14:18.780069538 +0000 UTC m=+0.153442444 container attach 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 10:14:18 compute-0 flamboyant_kapitsa[273594]: 167 167
Oct 08 10:14:18 compute-0 systemd[1]: libpod-98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f.scope: Deactivated successfully.
Oct 08 10:14:18 compute-0 podman[273577]: 2025-10-08 10:14:18.784649205 +0000 UTC m=+0.158022091 container died 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 08 10:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf572df37a33e01d090573b7fc0a768b59f1819511c38405fd5fed7f5c6f71d3-merged.mount: Deactivated successfully.
Oct 08 10:14:18 compute-0 podman[273577]: 2025-10-08 10:14:18.823430494 +0000 UTC m=+0.196803380 container remove 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:14:18 compute-0 systemd[1]: libpod-conmon-98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f.scope: Deactivated successfully.
Oct 08 10:14:18 compute-0 ceph-mon[73572]: pgmap v891: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 08 10:14:19 compute-0 podman[273618]: 2025-10-08 10:14:19.002584875 +0000 UTC m=+0.043934995 container create 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:14:19 compute-0 systemd[1]: Started libpod-conmon-9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec.scope.
Oct 08 10:14:19 compute-0 podman[273618]: 2025-10-08 10:14:18.9844074 +0000 UTC m=+0.025757540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:14:19 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:19 compute-0 podman[273618]: 2025-10-08 10:14:19.116186566 +0000 UTC m=+0.157536706 container init 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 08 10:14:19 compute-0 podman[273618]: 2025-10-08 10:14:19.123888084 +0000 UTC m=+0.165238214 container start 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 10:14:19 compute-0 podman[273618]: 2025-10-08 10:14:19.127652915 +0000 UTC m=+0.169003145 container attach 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:14:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]: {
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:     "1": [
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:         {
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "devices": [
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "/dev/loop3"
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             ],
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "lv_name": "ceph_lv0",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "lv_size": "21470642176",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "name": "ceph_lv0",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "tags": {
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.cluster_name": "ceph",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.crush_device_class": "",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.encrypted": "0",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.osd_id": "1",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.type": "block",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.vdo": "0",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:                 "ceph.with_tpm": "0"
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             },
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "type": "block",
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:             "vg_name": "ceph_vg0"
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:         }
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]:     ]
Oct 08 10:14:19 compute-0 dazzling_kirch[273635]: }
Oct 08 10:14:19 compute-0 systemd[1]: libpod-9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec.scope: Deactivated successfully.
Oct 08 10:14:19 compute-0 podman[273618]: 2025-10-08 10:14:19.422514343 +0000 UTC m=+0.463864473 container died 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133-merged.mount: Deactivated successfully.
Oct 08 10:14:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:19.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:19 compute-0 podman[273618]: 2025-10-08 10:14:19.476017257 +0000 UTC m=+0.517367377 container remove 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:14:19 compute-0 systemd[1]: libpod-conmon-9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec.scope: Deactivated successfully.
Oct 08 10:14:19 compute-0 sudo[273509]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:19 compute-0 sudo[273658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:14:19 compute-0 sudo[273658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:19 compute-0 sudo[273658]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:19 compute-0 sudo[273683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:14:19 compute-0 sudo[273683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:20 compute-0 podman[273749]: 2025-10-08 10:14:20.129268901 +0000 UTC m=+0.104551499 container create 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:14:20 compute-0 podman[273749]: 2025-10-08 10:14:20.049277114 +0000 UTC m=+0.024559732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:14:20 compute-0 nova_compute[262220]: 2025-10-08 10:14:20.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 1 op/s
Oct 08 10:14:20 compute-0 systemd[1]: Started libpod-conmon-271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249.scope.
Oct 08 10:14:20 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:14:20 compute-0 podman[273749]: 2025-10-08 10:14:20.241883779 +0000 UTC m=+0.217166397 container init 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:14:20 compute-0 podman[273749]: 2025-10-08 10:14:20.251217919 +0000 UTC m=+0.226500517 container start 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:14:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:20 compute-0 podman[273749]: 2025-10-08 10:14:20.257425319 +0000 UTC m=+0.232707947 container attach 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:14:20 compute-0 eager_heyrovsky[273766]: 167 167
Oct 08 10:14:20 compute-0 systemd[1]: libpod-271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249.scope: Deactivated successfully.
Oct 08 10:14:20 compute-0 podman[273749]: 2025-10-08 10:14:20.261208751 +0000 UTC m=+0.236491369 container died 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 10:14:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c11a71ab188b9541b790feb6be18ef39b94facbd5af11d76014eb9b375fab6d-merged.mount: Deactivated successfully.
Oct 08 10:14:20 compute-0 podman[273749]: 2025-10-08 10:14:20.322399703 +0000 UTC m=+0.297682321 container remove 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:14:20 compute-0 systemd[1]: libpod-conmon-271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249.scope: Deactivated successfully.
Oct 08 10:14:20 compute-0 podman[273789]: 2025-10-08 10:14:20.494182866 +0000 UTC m=+0.047543973 container create de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:14:20 compute-0 systemd[1]: Started libpod-conmon-de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a.scope.
Oct 08 10:14:20 compute-0 podman[273789]: 2025-10-08 10:14:20.474796091 +0000 UTC m=+0.028157198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:14:20 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:20 compute-0 podman[273789]: 2025-10-08 10:14:20.598222668 +0000 UTC m=+0.151584155 container init de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:14:20 compute-0 podman[273789]: 2025-10-08 10:14:20.613574233 +0000 UTC m=+0.166935350 container start de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 10:14:20 compute-0 podman[273789]: 2025-10-08 10:14:20.61785584 +0000 UTC m=+0.171216957 container attach de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:14:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:20.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:21 compute-0 ceph-mon[73572]: pgmap v892: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 1 op/s
Oct 08 10:14:21 compute-0 lvm[273880]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:14:21 compute-0 lvm[273880]: VG ceph_vg0 finished
Oct 08 10:14:21 compute-0 intelligent_borg[273805]: {}
Oct 08 10:14:21 compute-0 systemd[1]: libpod-de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a.scope: Deactivated successfully.
Oct 08 10:14:21 compute-0 systemd[1]: libpod-de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a.scope: Consumed 1.305s CPU time.
Oct 08 10:14:21 compute-0 podman[273789]: 2025-10-08 10:14:21.382214213 +0000 UTC m=+0.935575320 container died de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4-merged.mount: Deactivated successfully.
Oct 08 10:14:21 compute-0 podman[273789]: 2025-10-08 10:14:21.434688844 +0000 UTC m=+0.988049931 container remove de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:14:21 compute-0 systemd[1]: libpod-conmon-de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a.scope: Deactivated successfully.
Oct 08 10:14:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:21.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:21 compute-0 sudo[273683]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:14:21 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:21 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:14:21 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:21 compute-0 sudo[273895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:14:21 compute-0 sudo[273895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:21 compute-0 sudo[273895]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 08 10:14:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654234200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:14:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 08 10:14:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654234200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:14:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 08 10:14:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:22 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:22 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:14:22 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1654234200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:14:22 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1654234200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:14:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:22.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:22 compute-0 nova_compute[262220]: 2025-10-08 10:14:22.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:23 compute-0 ceph-mon[73572]: pgmap v893: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 08 10:14:23 compute-0 nova_compute[262220]: 2025-10-08 10:14:23.591 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:23 compute-0 nova_compute[262220]: 2025-10-08 10:14:23.591 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:23 compute-0 nova_compute[262220]: 2025-10-08 10:14:23.621 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 08 10:14:23 compute-0 nova_compute[262220]: 2025-10-08 10:14:23.733 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:23 compute-0 nova_compute[262220]: 2025-10-08 10:14:23.733 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:23 compute-0 nova_compute[262220]: 2025-10-08 10:14:23.741 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 08 10:14:23 compute-0 nova_compute[262220]: 2025-10-08 10:14:23.741 2 INFO nova.compute.claims [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Claim successful on node compute-0.ctlplane.example.com
Oct 08 10:14:23 compute-0 nova_compute[262220]: 2025-10-08 10:14:23.853 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:14:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 08 10:14:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:14:24 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3158757203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.314 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.320 2 DEBUG nova.compute.provider_tree [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.335 2 DEBUG nova.scheduler.client.report [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.364 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.365 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.426 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.427 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.453 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.477 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 08 10:14:24 compute-0 ceph-mon[73572]: pgmap v894: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 08 10:14:24 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3158757203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.708 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 08 10:14:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:14:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:24.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.710 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.711 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Creating image(s)
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.748 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.783 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.814 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.820 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.900 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.902 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "3cde70359534d4758cf71011630bd1fb14a90c92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.903 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.903 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.943 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:14:24 compute-0 nova_compute[262220]: 2025-10-08 10:14:24.948 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.022 2 DEBUG nova.policy [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 08 10:14:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.251 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.338 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] resizing rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.443 2 DEBUG nova.objects.instance [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'migration_context' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.458 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.458 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Ensure instance console log exists: /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.458 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.459 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:25 compute-0 nova_compute[262220]: 2025-10-08 10:14:25.459 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:25] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:14:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:25] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:14:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 08 10:14:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:26 compute-0 nova_compute[262220]: 2025-10-08 10:14:26.547 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Successfully created port: be4ec274-2a90-48e8-bd51-fd01f3c659da _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 08 10:14:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:26.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:27.150Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:14:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:27.150Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:14:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:27.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:14:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:27 compute-0 ceph-mon[73572]: pgmap v895: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 08 10:14:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:27.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:27 compute-0 sudo[274114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:14:27 compute-0 sudo[274114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:27 compute-0 sudo[274114]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:14:27 compute-0 nova_compute[262220]: 2025-10-08 10:14:27.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 08 10:14:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:28 compute-0 nova_compute[262220]: 2025-10-08 10:14:28.324 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Successfully updated port: be4ec274-2a90-48e8-bd51-fd01f3c659da _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 08 10:14:28 compute-0 nova_compute[262220]: 2025-10-08 10:14:28.348 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:14:28 compute-0 nova_compute[262220]: 2025-10-08 10:14:28.348 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:14:28 compute-0 nova_compute[262220]: 2025-10-08 10:14:28.348 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 08 10:14:28 compute-0 nova_compute[262220]: 2025-10-08 10:14:28.403 2 DEBUG nova.compute.manager [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:14:28 compute-0 nova_compute[262220]: 2025-10-08 10:14:28.404 2 DEBUG nova.compute.manager [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:14:28 compute-0 nova_compute[262220]: 2025-10-08 10:14:28.404 2 DEBUG oslo_concurrency.lockutils [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:14:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:28.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:29 compute-0 ceph-mon[73572]: pgmap v896: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 08 10:14:29 compute-0 nova_compute[262220]: 2025-10-08 10:14:29.337 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 08 10:14:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:29.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:30 compute-0 nova_compute[262220]: 2025-10-08 10:14:30.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:14:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:30.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:30 compute-0 podman[274142]: 2025-10-08 10:14:30.919985725 +0000 UTC m=+0.076296448 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 08 10:14:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.286 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.303 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.303 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance network_info: |[{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.304 2 DEBUG oslo_concurrency.lockutils [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.304 2 DEBUG nova.network.neutron [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.307 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start _get_guest_xml network_info=[{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': 'e5994bac-385d-4cfe-962e-386aa0559983'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.314 2 WARNING nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.319 2 DEBUG nova.virt.libvirt.host [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 08 10:14:31 compute-0 ceph-mon[73572]: pgmap v897: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.320 2 DEBUG nova.virt.libvirt.host [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.323 2 DEBUG nova.virt.libvirt.host [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.323 2 DEBUG nova.virt.libvirt.host [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.324 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.324 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T10:08:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='461f98d6-ae65-4f86-8ae2-cc3cfaea2a46',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.324 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.326 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.326 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.326 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.326 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.330 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:14:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:14:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:31.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:14:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 08 10:14:31 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3828849259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.863 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.909 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:14:31 compute-0 nova_compute[262220]: 2025-10-08 10:14:31.915 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:14:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:14:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2666 syncs, 4.09 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1892 writes, 5856 keys, 1892 commit groups, 1.0 writes per commit group, ingest: 6.53 MB, 0.01 MB/s
                                           Interval WAL: 1892 writes, 779 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 08 10:14:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:14:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3828849259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:14:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 08 10:14:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652061004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.380 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.383 2 DEBUG nova.virt.libvirt.vif [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:14:24Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.384 2 DEBUG nova.network.os_vif_util [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.385 2 DEBUG nova.network.os_vif_util [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.387 2 DEBUG nova.objects.instance [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_devices' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.405 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] End _get_guest_xml xml=<domain type="kvm">
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <name>instance-00000006</name>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <memory>131072</memory>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <vcpu>1</vcpu>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <metadata>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <nova:creationTime>2025-10-08 10:14:31</nova:creationTime>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <nova:flavor name="m1.nano">
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <nova:memory>128</nova:memory>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <nova:disk>1</nova:disk>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <nova:swap>0</nova:swap>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <nova:vcpus>1</nova:vcpus>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       </nova:flavor>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <nova:owner>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       </nova:owner>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <nova:ports>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct 08 10:14:32 compute-0 nova_compute[262220]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         </nova:port>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       </nova:ports>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     </nova:instance>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   </metadata>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <sysinfo type="smbios">
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <system>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <entry name="manufacturer">RDO</entry>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <entry name="product">OpenStack Compute</entry>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <entry name="serial">ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <entry name="uuid">ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <entry name="family">Virtual Machine</entry>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     </system>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   </sysinfo>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <os>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <boot dev="hd"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <smbios mode="sysinfo"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   </os>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <features>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <acpi/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <apic/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <vmcoreinfo/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   </features>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <clock offset="utc">
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <timer name="hpet" present="no"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   </clock>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <cpu mode="host-model" match="exact">
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <disk type="network" device="disk">
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <driver type="raw" cache="none"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <source protocol="rbd" name="vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk">
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <host name="192.168.122.100" port="6789"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <host name="192.168.122.102" port="6789"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <host name="192.168.122.101" port="6789"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       </source>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <auth username="openstack">
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <target dev="vda" bus="virtio"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <disk type="network" device="cdrom">
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <driver type="raw" cache="none"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <source protocol="rbd" name="vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config">
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <host name="192.168.122.100" port="6789"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <host name="192.168.122.102" port="6789"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <host name="192.168.122.101" port="6789"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       </source>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <auth username="openstack">
Oct 08 10:14:32 compute-0 nova_compute[262220]:         <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <target dev="sda" bus="sata"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <interface type="ethernet">
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <mac address="fa:16:3e:e6:b0:e0"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <model type="virtio"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <mtu size="1442"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <target dev="tapbe4ec274-2a"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <serial type="pty">
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <log file="/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log" append="off"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     </serial>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <video>
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <model type="virtio"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     </video>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <input type="tablet" bus="usb"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <rng model="virtio">
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <backend model="random">/dev/urandom</backend>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <controller type="usb" index="0"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     <memballoon model="virtio">
Oct 08 10:14:32 compute-0 nova_compute[262220]:       <stats period="10"/>
Oct 08 10:14:32 compute-0 nova_compute[262220]:     </memballoon>
Oct 08 10:14:32 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:14:32 compute-0 nova_compute[262220]: </domain>
Oct 08 10:14:32 compute-0 nova_compute[262220]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.407 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Preparing to wait for external event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.407 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.407 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.407 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.408 2 DEBUG nova.virt.libvirt.vif [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:14:24Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.408 2 DEBUG nova.network.os_vif_util [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.409 2 DEBUG nova.network.os_vif_util [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.409 2 DEBUG os_vif [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.412 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe4ec274-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe4ec274-2a, col_values=(('external_ids', {'iface-id': 'be4ec274-2a90-48e8-bd51-fd01f3c659da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:b0:e0', 'vm-uuid': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 08 10:14:32 compute-0 NetworkManager[44872]: <info>  [1759918472.4220] manager: (tapbe4ec274-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.427 2 INFO os_vif [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a')
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.485 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.487 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.488 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:e6:b0:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.489 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Using config drive
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.530 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:14:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:14:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:32.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:14:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:14:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:14:32 compute-0 nova_compute[262220]: 2025-10-08 10:14:32.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:33 compute-0 ceph-mon[73572]: pgmap v898: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 08 10:14:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/652061004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:14:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:14:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:33.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:33 compute-0 nova_compute[262220]: 2025-10-08 10:14:33.489 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Creating config drive at /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config
Oct 08 10:14:33 compute-0 nova_compute[262220]: 2025-10-08 10:14:33.493 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg3mw4hpq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:14:33 compute-0 nova_compute[262220]: 2025-10-08 10:14:33.631 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg3mw4hpq" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:14:33 compute-0 nova_compute[262220]: 2025-10-08 10:14:33.667 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:14:33 compute-0 nova_compute[262220]: 2025-10-08 10:14:33.671 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:14:33 compute-0 nova_compute[262220]: 2025-10-08 10:14:33.881 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:14:33 compute-0 nova_compute[262220]: 2025-10-08 10:14:33.883 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Deleting local config drive /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config because it was imported into RBD.
Oct 08 10:14:33 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 08 10:14:33 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 08 10:14:34 compute-0 kernel: tapbe4ec274-2a: entered promiscuous mode
Oct 08 10:14:34 compute-0 NetworkManager[44872]: <info>  [1759918474.0074] manager: (tapbe4ec274-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:34 compute-0 ovn_controller[153187]: 2025-10-08T10:14:34Z|00037|binding|INFO|Claiming lport be4ec274-2a90-48e8-bd51-fd01f3c659da for this chassis.
Oct 08 10:14:34 compute-0 ovn_controller[153187]: 2025-10-08T10:14:34Z|00038|binding|INFO|be4ec274-2a90-48e8-bd51-fd01f3c659da: Claiming fa:16:3e:e6:b0:e0 10.100.0.3
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.066 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:b0:e0 10.100.0.3'], port_security=['fa:16:3e:e6:b0:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-834a886f-bb33-49ed-b47e-ef0308a38e89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13817d67-6af8-4060-9f0c-16a7fd8532c0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eaf1a8f-1880-48d7-9974-4c1e9169efe5, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=be4ec274-2a90-48e8-bd51-fd01f3c659da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.067 163175 INFO neutron.agent.ovn.metadata.agent [-] Port be4ec274-2a90-48e8-bd51-fd01f3c659da in datapath 834a886f-bb33-49ed-b47e-ef0308a38e89 bound to our chassis
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.069 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 834a886f-bb33-49ed-b47e-ef0308a38e89
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.086 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[5de2ab1a-ef6d-4f1c-8c1c-20ff9e68c1ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.087 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap834a886f-b1 in ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 08 10:14:34 compute-0 systemd-machined[216030]: New machine qemu-2-instance-00000006.
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.090 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap834a886f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.091 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[47337ed7-4b78-439d-9c6f-6ed88c6cde3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.092 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[683e143f-311f-4444-8d20-484b90e2758a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000006.
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.114 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ae3106-f580-4530-99f0-1d6cd00856c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:34 compute-0 ovn_controller[153187]: 2025-10-08T10:14:34Z|00039|binding|INFO|Setting lport be4ec274-2a90-48e8-bd51-fd01f3c659da ovn-installed in OVS
Oct 08 10:14:34 compute-0 ovn_controller[153187]: 2025-10-08T10:14:34Z|00040|binding|INFO|Setting lport be4ec274-2a90-48e8-bd51-fd01f3c659da up in Southbound
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.138 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[da7ada3e-18a7-41e1-b1c3-b88e6dc893be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 systemd-udevd[274322]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:14:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:34 compute-0 NetworkManager[44872]: <info>  [1759918474.1693] device (tapbe4ec274-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 10:14:34 compute-0 NetworkManager[44872]: <info>  [1759918474.1709] device (tapbe4ec274-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.177 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[1ddc5b70-bde4-49ed-ac3a-95b45637b4d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 NetworkManager[44872]: <info>  [1759918474.1876] manager: (tap834a886f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.186 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[71f1919f-0064-4bda-936d-470061e1201c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.229 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[0e281fc7-9b8f-4bc4-b3b7-638933d7d01b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.233 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a9ba21-ffa5-41a4-b10e-a0c4413c8d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:34 compute-0 NetworkManager[44872]: <info>  [1759918474.2713] device (tap834a886f-b0): carrier: link connected
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.278 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[3663e998-3c9c-4088-9a66-d7aea36c704d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.304 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[45be398f-3275-4907-8361-f6bef3c9512e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap834a886f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:82:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443290, 'reachable_time': 36315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274352, 'error': None, 'target': 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.324 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6adc5d0c-de0f-4db4-a29c-c06a20d2592d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:82b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443290, 'tstamp': 443290}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274353, 'error': None, 'target': 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.346 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[aefe0c1b-1536-4087-bef6-ef40930dcdc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap834a886f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:82:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443290, 'reachable_time': 36315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274354, 'error': None, 'target': 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.368623) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474368721, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2122, "num_deletes": 251, "total_data_size": 4184829, "memory_usage": 4252152, "flush_reason": "Manual Compaction"}
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.392 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f92ebba2-02db-4ded-8f2d-9ab8900ba16d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474400020, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4063382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24787, "largest_seqno": 26908, "table_properties": {"data_size": 4054034, "index_size": 5842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19542, "raw_average_key_size": 20, "raw_value_size": 4035327, "raw_average_value_size": 4181, "num_data_blocks": 257, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918264, "oldest_key_time": 1759918264, "file_creation_time": 1759918474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 31445 microseconds, and 12460 cpu microseconds.
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.400084) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4063382 bytes OK
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.400112) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.401564) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.401579) EVENT_LOG_v1 {"time_micros": 1759918474401574, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.401606) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4176252, prev total WAL file size 4176252, number of live WAL files 2.
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.402680) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3968KB)], [56(11MB)]
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474403274, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16442724, "oldest_snapshot_seqno": -1}
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.470 2 DEBUG nova.compute.manager [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.471 2 DEBUG oslo_concurrency.lockutils [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.471 2 DEBUG oslo_concurrency.lockutils [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.472 2 DEBUG oslo_concurrency.lockutils [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.472 2 DEBUG nova.compute.manager [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Processing event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.473 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c3cf43-9e9c-4362-9406-ec72888cd737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.475 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap834a886f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.475 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.476 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap834a886f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:34 compute-0 NetworkManager[44872]: <info>  [1759918474.4796] manager: (tap834a886f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 08 10:14:34 compute-0 kernel: tap834a886f-b0: entered promiscuous mode
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.490 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap834a886f-b0, col_values=(('external_ids', {'iface-id': 'f613d263-6ad2-4e23-84bc-b066c6b6b34a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:14:34 compute-0 ovn_controller[153187]: 2025-10-08T10:14:34Z|00041|binding|INFO|Releasing lport f613d263-6ad2-4e23-84bc-b066c6b6b34a from this chassis (sb_readonly=0)
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.498 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/834a886f-bb33-49ed-b47e-ef0308a38e89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/834a886f-bb33-49ed-b47e-ef0308a38e89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.500 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[788e2237-d1d7-41f7-9bf9-b0888795e7e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.501 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: global
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     log         /dev/log local0 debug
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     log-tag     haproxy-metadata-proxy-834a886f-bb33-49ed-b47e-ef0308a38e89
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     user        root
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     group       root
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     maxconn     1024
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     pidfile     /var/lib/neutron/external/pids/834a886f-bb33-49ed-b47e-ef0308a38e89.pid.haproxy
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     daemon
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: defaults
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     log global
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     mode http
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     option httplog
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     option dontlognull
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     option http-server-close
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     option forwardfor
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     retries                 3
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     timeout http-request    30s
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     timeout connect         30s
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     timeout client          32s
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     timeout server          32s
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     timeout http-keep-alive 30s
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: listen listener
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     bind 169.254.169.254:80
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:     http-request add-header X-OVN-Network-ID 834a886f-bb33-49ed-b47e-ef0308a38e89
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 08 10:14:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.503 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'env', 'PROCESS_TAG=haproxy-834a886f-bb33-49ed-b47e-ef0308a38e89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/834a886f-bb33-49ed-b47e-ef0308a38e89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5816 keys, 14317326 bytes, temperature: kUnknown
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474505845, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14317326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14277846, "index_size": 23818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 147901, "raw_average_key_size": 25, "raw_value_size": 14172310, "raw_average_value_size": 2436, "num_data_blocks": 970, "num_entries": 5816, "num_filter_entries": 5816, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.506105) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14317326 bytes
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.507287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.2 rd, 139.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.8 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 6334, records dropped: 518 output_compression: NoCompression
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.507304) EVENT_LOG_v1 {"time_micros": 1759918474507297, "job": 30, "event": "compaction_finished", "compaction_time_micros": 102640, "compaction_time_cpu_micros": 32688, "output_level": 6, "num_output_files": 1, "total_output_size": 14317326, "num_input_records": 6334, "num_output_records": 5816, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474507988, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474510366, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.402577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:14:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:14:34 compute-0 nova_compute[262220]: 2025-10-08 10:14:34.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:34 compute-0 podman[274428]: 2025-10-08 10:14:34.891130302 +0000 UTC m=+0.058209766 container create 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:14:34 compute-0 systemd[1]: Started libpod-conmon-2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9.scope.
Oct 08 10:14:34 compute-0 podman[274428]: 2025-10-08 10:14:34.857256681 +0000 UTC m=+0.024336165 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 08 10:14:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841b76c2441b0eb7f658de0d9799efa6ab00baf820e9b70f7311256c5c904ae8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 10:14:34 compute-0 podman[274428]: 2025-10-08 10:14:34.985323326 +0000 UTC m=+0.152402820 container init 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 10:14:34 compute-0 podman[274428]: 2025-10-08 10:14:34.993985865 +0000 UTC m=+0.161065329 container start 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 08 10:14:35 compute-0 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [NOTICE]   (274447) : New worker (274449) forked
Oct 08 10:14:35 compute-0 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [NOTICE]   (274447) : Loading success.
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.041 2 DEBUG nova.network.neutron [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.041 2 DEBUG nova.network.neutron [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.059 2 DEBUG oslo_concurrency.lockutils [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.116 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.118 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918475.1170862, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.118 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] VM Started (Lifecycle Event)
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.122 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.126 2 INFO nova.virt.libvirt.driver [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance spawned successfully.
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.127 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.139 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.143 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 08 10:14:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.271 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.271 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.272 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.272 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.273 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.273 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.277 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.278 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918475.1184063, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.278 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] VM Paused (Lifecycle Event)
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.302 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.306 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918475.1213126, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.306 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] VM Resumed (Lifecycle Event)
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.334 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.337 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.352 2 INFO nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Took 10.64 seconds to spawn the instance on the hypervisor.
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.353 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.361 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 08 10:14:35 compute-0 ceph-mon[73572]: pgmap v899: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:14:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:35.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:35] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.736 2 INFO nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Took 12.04 seconds to build instance.
Oct 08 10:14:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:35] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 08 10:14:35 compute-0 nova_compute[262220]: 2025-10-08 10:14:35.985 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:14:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:36 compute-0 nova_compute[262220]: 2025-10-08 10:14:36.554 2 DEBUG nova.compute.manager [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:14:36 compute-0 nova_compute[262220]: 2025-10-08 10:14:36.554 2 DEBUG oslo_concurrency.lockutils [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:36 compute-0 nova_compute[262220]: 2025-10-08 10:14:36.554 2 DEBUG oslo_concurrency.lockutils [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:36 compute-0 nova_compute[262220]: 2025-10-08 10:14:36.555 2 DEBUG oslo_concurrency.lockutils [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:36 compute-0 nova_compute[262220]: 2025-10-08 10:14:36.555 2 DEBUG nova.compute.manager [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:14:36 compute-0 nova_compute[262220]: 2025-10-08 10:14:36.555 2 WARNING nova.compute.manager [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da for instance with vm_state active and task_state None.
Oct 08 10:14:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:36.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:37.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:14:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:37 compute-0 ceph-mon[73572]: pgmap v900: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:14:37 compute-0 nova_compute[262220]: 2025-10-08 10:14:37.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:37.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:14:37 compute-0 nova_compute[262220]: 2025-10-08 10:14:37.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:14:38 compute-0 ovn_controller[153187]: 2025-10-08T10:14:38Z|00042|binding|INFO|Releasing lport f613d263-6ad2-4e23-84bc-b066c6b6b34a from this chassis (sb_readonly=0)
Oct 08 10:14:38 compute-0 NetworkManager[44872]: <info>  [1759918478.2198] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 08 10:14:38 compute-0 NetworkManager[44872]: <info>  [1759918478.2207] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 08 10:14:38 compute-0 nova_compute[262220]: 2025-10-08 10:14:38.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:38 compute-0 ovn_controller[153187]: 2025-10-08T10:14:38Z|00043|binding|INFO|Releasing lport f613d263-6ad2-4e23-84bc-b066c6b6b34a from this chassis (sb_readonly=0)
Oct 08 10:14:38 compute-0 nova_compute[262220]: 2025-10-08 10:14:38.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:38 compute-0 nova_compute[262220]: 2025-10-08 10:14:38.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:38 compute-0 nova_compute[262220]: 2025-10-08 10:14:38.575 2 DEBUG nova.compute.manager [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:14:38 compute-0 nova_compute[262220]: 2025-10-08 10:14:38.575 2 DEBUG nova.compute.manager [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:14:38 compute-0 nova_compute[262220]: 2025-10-08 10:14:38.576 2 DEBUG oslo_concurrency.lockutils [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:14:38 compute-0 nova_compute[262220]: 2025-10-08 10:14:38.576 2 DEBUG oslo_concurrency.lockutils [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:14:38 compute-0 nova_compute[262220]: 2025-10-08 10:14:38.576 2 DEBUG nova.network.neutron [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:14:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:39 compute-0 ceph-mon[73572]: pgmap v901: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:14:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:39.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 08 10:14:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:14:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:40.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:14:40 compute-0 nova_compute[262220]: 2025-10-08 10:14:40.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:40 compute-0 nova_compute[262220]: 2025-10-08 10:14:40.927 2 DEBUG nova.network.neutron [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:14:40 compute-0 nova_compute[262220]: 2025-10-08 10:14:40.928 2 DEBUG nova.network.neutron [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:14:40 compute-0 nova_compute[262220]: 2025-10-08 10:14:40.946 2 DEBUG oslo_concurrency.lockutils [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:14:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:41 compute-0 ceph-mon[73572]: pgmap v902: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 08 10:14:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:41.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:41 compute-0 nova_compute[262220]: 2025-10-08 10:14:41.897 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:41 compute-0 podman[274466]: 2025-10-08 10:14:41.930015264 +0000 UTC m=+0.089444412 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 08 10:14:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:14:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:42 compute-0 nova_compute[262220]: 2025-10-08 10:14:42.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:42.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:42 compute-0 nova_compute[262220]: 2025-10-08 10:14:42.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:42 compute-0 nova_compute[262220]: 2025-10-08 10:14:42.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:42 compute-0 nova_compute[262220]: 2025-10-08 10:14:42.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:42 compute-0 nova_compute[262220]: 2025-10-08 10:14:42.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 08 10:14:42 compute-0 nova_compute[262220]: 2025-10-08 10:14:42.906 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 08 10:14:42 compute-0 nova_compute[262220]: 2025-10-08 10:14:42.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:43 compute-0 ceph-mon[73572]: pgmap v903: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:14:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:43 compute-0 nova_compute[262220]: 2025-10-08 10:14:43.906 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:43 compute-0 nova_compute[262220]: 2025-10-08 10:14:43.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:43 compute-0 nova_compute[262220]: 2025-10-08 10:14:43.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:43 compute-0 nova_compute[262220]: 2025-10-08 10:14:43.933 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:43 compute-0 nova_compute[262220]: 2025-10-08 10:14:43.933 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:14:43 compute-0 nova_compute[262220]: 2025-10-08 10:14:43.933 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:14:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:14:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:14:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149238324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.420 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
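The resource-tracker audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" via oslo.concurrency to read pool capacity. A minimal sketch of the same probe, assuming the openstack keyring and /etc/ceph/ceph.conf referenced in the log are readable where it runs (the "stats" field names follow upstream ceph df JSON; verify against the installed release):

    import json
    import subprocess

    # Same command nova_compute logs above; --format=json makes it machine-readable.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out)["stats"]
    # Cluster-wide totals the RBD storage audit is interested in.
    print(stats["total_bytes"], stats["total_avail_bytes"])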
Oct 08 10:14:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1149238324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.485 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.486 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.644 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.645 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4438MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.646 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.646 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:44.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.892 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.893 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.893 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:14:44 compute-0 nova_compute[262220]: 2025-10-08 10:14:44.961 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.020 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.021 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.036 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.068 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.098 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:14:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:45 compute-0 ceph-mon[73572]: pgmap v904: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:14:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:14:45 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390333083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.607 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.613 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.628 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
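The inventory dictionary logged above is what placement uses to cap new allocations against provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2. A short worked sketch of the arithmetic, assuming placement's usual capacity rule of (total - reserved) * allocation_ratio applied to the logged values:

    # Capacity implied by the inventory dict in the log line above,
    # assuming capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2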
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.658 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:14:45 compute-0 nova_compute[262220]: 2025-10-08 10:14:45.659 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct 08 10:14:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct 08 10:14:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Oct 08 10:14:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3390333083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:46 compute-0 nova_compute[262220]: 2025-10-08 10:14:46.640 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:46 compute-0 nova_compute[262220]: 2025-10-08 10:14:46.659 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:46.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:46 compute-0 nova_compute[262220]: 2025-10-08 10:14:46.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:46 compute-0 nova_compute[262220]: 2025-10-08 10:14:46.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:14:46 compute-0 nova_compute[262220]: 2025-10-08 10:14:46.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:14:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:47.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:14:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:47.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
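The two Alertmanager lines above report that the ceph-dashboard webhook receivers on compute-1 and compute-2 cannot be reached on port 8443 (connect timeout, then retries exhausted). A sketch of a reachability check against the address taken from the log, assuming it is run from compute-0:

    import socket

    # A timeout here matches the "dial tcp 192.168.122.101:8443: i/o timeout"
    # reported by Alertmanager above.
    try:
        socket.create_connection(("compute-1.ctlplane.example.com", 8443), timeout=5).close()
        print("reachable")
    except OSError as exc:
        print("unreachable:", exc)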
Oct 08 10:14:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:47 compute-0 nova_compute[262220]: 2025-10-08 10:14:47.324 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:14:47 compute-0 nova_compute[262220]: 2025-10-08 10:14:47.325 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:14:47 compute-0 nova_compute[262220]: 2025-10-08 10:14:47.325 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 08 10:14:47 compute-0 nova_compute[262220]: 2025-10-08 10:14:47.325 2 DEBUG nova.objects.instance [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:14:47 compute-0 nova_compute[262220]: 2025-10-08 10:14:47.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:47 compute-0 ceph-mon[73572]: pgmap v905: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Oct 08 10:14:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/36288157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:14:47
Oct 08 10:14:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:14:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:14:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'volumes', '.nfs', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.mgr']
Oct 08 10:14:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:14:47 compute-0 sudo[274544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:14:47 compute-0 sudo[274544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:14:47 compute-0 sudo[274544]: pam_unix(sudo:session): session closed for user root
Oct 08 10:14:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:14:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:14:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:14:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:14:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:14:47 compute-0 nova_compute[262220]: 2025-10-08 10:14:47.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:48 compute-0 ovn_controller[153187]: 2025-10-08T10:14:48Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:b0:e0 10.100.0.3
Oct 08 10:14:48 compute-0 ovn_controller[153187]: 2025-10-08T10:14:48Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:b0:e0 10.100.0.3
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
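The per-pool figures above are consistent with pg target ≈ usage_fraction × bias × (mon_target_pg_per_osd × OSD count), which is then quantized to a power of two. With three OSDs and the default mon_target_pg_per_osd of 100 the factor is 300; both of those values are assumptions about this cluster, not logged here. A short check against the logged numbers:

    # Reproduce the pg_autoscaler "pg target" values logged above,
    # assuming mon_target_pg_per_osd=100 and 3 OSDs (factor 300).
    pools = {
        ".mgr": (7.185749983720779e-06, 1.0),
        "vms": (0.00034841348814872695, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * 300)
    # -> 0.0021557..., 0.1045240..., 0.0006104..., matching the pre-quantization
    #    targets in the log.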
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:14:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:14:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2692607519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:14:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3167404767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:48 compute-0 nova_compute[262220]: 2025-10-08 10:14:48.705 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:14:48 compute-0 nova_compute[262220]: 2025-10-08 10:14:48.731 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:14:48 compute-0 nova_compute[262220]: 2025-10-08 10:14:48.732 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 08 10:14:48 compute-0 nova_compute[262220]: 2025-10-08 10:14:48.732 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:48 compute-0 nova_compute[262220]: 2025-10-08 10:14:48.733 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:48 compute-0 nova_compute[262220]: 2025-10-08 10:14:48.733 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:48 compute-0 nova_compute[262220]: 2025-10-08 10:14:48.733 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:14:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:48.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:48 compute-0 podman[274571]: 2025-10-08 10:14:48.902051861 +0000 UTC m=+0.060436068 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:14:48 compute-0 podman[274570]: 2025-10-08 10:14:48.917100776 +0000 UTC m=+0.076756474 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 08 10:14:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:49.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:49 compute-0 ceph-mon[73572]: pgmap v906: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Oct 08 10:14:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2249919879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:14:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 08 10:14:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:50 compute-0 ceph-mon[73572]: pgmap v907: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 08 10:14:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:50.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:14:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:51.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:14:51 compute-0 nova_compute[262220]: 2025-10-08 10:14:51.920 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:14:51 compute-0 nova_compute[262220]: 2025-10-08 10:14:51.940 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Triggering sync for uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 08 10:14:51 compute-0 nova_compute[262220]: 2025-10-08 10:14:51.940 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:51 compute-0 nova_compute[262220]: 2025-10-08 10:14:51.941 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:51 compute-0 nova_compute[262220]: 2025-10-08 10:14:51.979 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 08 10:14:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:52 compute-0 nova_compute[262220]: 2025-10-08 10:14:52.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:52.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:52 compute-0 nova_compute[262220]: 2025-10-08 10:14:52.987 2 INFO nova.compute.manager [None req-7f625ccc-5c89-4d62-996c-ca423229ac60 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Get console output
Oct 08 10:14:52 compute-0 nova_compute[262220]: 2025-10-08 10:14:52.993 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 08 10:14:53 compute-0 nova_compute[262220]: 2025-10-08 10:14:52.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:53 compute-0 ceph-mon[73572]: pgmap v908: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 08 10:14:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:14:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:53.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:14:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:14:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:54.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:55 compute-0 nova_compute[262220]: 2025-10-08 10:14:55.199 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:55 compute-0 nova_compute[262220]: 2025-10-08 10:14:55.199 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:55 compute-0 nova_compute[262220]: 2025-10-08 10:14:55.200 2 DEBUG nova.objects.instance [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'flavor' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:14:55 compute-0 ceph-mon[73572]: pgmap v909: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:14:55 compute-0 nova_compute[262220]: 2025-10-08 10:14:55.485 2 DEBUG nova.objects.instance [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_requests' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:14:55 compute-0 nova_compute[262220]: 2025-10-08 10:14:55.497 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 08 10:14:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:55.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:55 compute-0 nova_compute[262220]: 2025-10-08 10:14:55.660 2 DEBUG nova.policy [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 08 10:14:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:55] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct 08 10:14:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:55] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct 08 10:14:56 compute-0 nova_compute[262220]: 2025-10-08 10:14:56.129 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Successfully created port: 79d28498-fe9d-49dc-ad2c-bde432b239db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 08 10:14:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:14:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:14:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:56.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:14:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:57.154Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:14:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:57.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:14:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:57 compute-0 ceph-mon[73572]: pgmap v910: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:14:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:57.413 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:14:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:57.414 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:14:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:14:57.415 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:14:57 compute-0 nova_compute[262220]: 2025-10-08 10:14:57.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:57.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:14:58 compute-0 nova_compute[262220]: 2025-10-08 10:14:58.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:14:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:14:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:58.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:14:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:14:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:14:59 compute-0 ceph-mon[73572]: pgmap v911: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:14:59 compute-0 nova_compute[262220]: 2025-10-08 10:14:59.337 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Successfully updated port: 79d28498-fe9d-49dc-ad2c-bde432b239db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 08 10:14:59 compute-0 nova_compute[262220]: 2025-10-08 10:14:59.402 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:14:59 compute-0 nova_compute[262220]: 2025-10-08 10:14:59.402 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:14:59 compute-0 nova_compute[262220]: 2025-10-08 10:14:59.403 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 08 10:14:59 compute-0 nova_compute[262220]: 2025-10-08 10:14:59.436 2 DEBUG nova.compute.manager [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:14:59 compute-0 nova_compute[262220]: 2025-10-08 10:14:59.438 2 DEBUG nova.compute.manager [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-79d28498-fe9d-49dc-ad2c-bde432b239db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:14:59 compute-0 nova_compute[262220]: 2025-10-08 10:14:59.439 2 DEBUG oslo_concurrency.lockutils [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:14:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:14:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:14:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:59.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:15:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:15:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:00.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:15:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:01 compute-0 ceph-mon[73572]: pgmap v912: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:15:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:01.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:01 compute-0 podman[274628]: 2025-10-08 10:15:01.889708011 +0000 UTC m=+0.054823607 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 08 10:15:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 1 op/s
Oct 08 10:15:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:02 compute-0 nova_compute[262220]: 2025-10-08 10:15:02.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:15:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:02.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:15:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:15:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.260 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.287 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.288 2 DEBUG oslo_concurrency.lockutils [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.288 2 DEBUG nova.network.neutron [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port 79d28498-fe9d-49dc-ad2c-bde432b239db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.292 2 DEBUG nova.virt.libvirt.vif [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.292 2 DEBUG nova.network.os_vif_util [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.294 2 DEBUG nova.network.os_vif_util [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.294 2 DEBUG os_vif [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79d28498-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79d28498-fe, col_values=(('external_ids', {'iface-id': '79d28498-fe9d-49dc-ad2c-bde432b239db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:4d:66', 'vm-uuid': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 NetworkManager[44872]: <info>  [1759918503.3097] manager: (tap79d28498-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.318 2 INFO os_vif [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe')
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.319 2 DEBUG nova.virt.libvirt.vif [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.320 2 DEBUG nova.network.os_vif_util [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.320 2 DEBUG nova.network.os_vif_util [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.324 2 DEBUG nova.virt.libvirt.guest [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] attach device xml: <interface type="ethernet">
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <mac address="fa:16:3e:40:4d:66"/>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <model type="virtio"/>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <driver name="vhost" rx_queue_size="512"/>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <mtu size="1442"/>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <target dev="tap79d28498-fe"/>
Oct 08 10:15:03 compute-0 nova_compute[262220]: </interface>
Oct 08 10:15:03 compute-0 nova_compute[262220]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 08 10:15:03 compute-0 kernel: tap79d28498-fe: entered promiscuous mode
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 NetworkManager[44872]: <info>  [1759918503.3482] manager: (tap79d28498-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct 08 10:15:03 compute-0 ovn_controller[153187]: 2025-10-08T10:15:03Z|00044|binding|INFO|Claiming lport 79d28498-fe9d-49dc-ad2c-bde432b239db for this chassis.
Oct 08 10:15:03 compute-0 ovn_controller[153187]: 2025-10-08T10:15:03Z|00045|binding|INFO|79d28498-fe9d-49dc-ad2c-bde432b239db: Claiming fa:16:3e:40:4d:66 10.100.0.23
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.389 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:4d:66 10.100.0.23'], port_security=['fa:16:3e:40:4d:66 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a28a475-c59d-4526-93af-b8af40052e5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7f5008fb-e9a5-4fed-867f-172652283a31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6ba97cc-1c15-47ba-aa89-c964fcf23523, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=79d28498-fe9d-49dc-ad2c-bde432b239db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.390 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 79d28498-fe9d-49dc-ad2c-bde432b239db in datapath 0a28a475-c59d-4526-93af-b8af40052e5c bound to our chassis
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.391 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a28a475-c59d-4526-93af-b8af40052e5c
Oct 08 10:15:03 compute-0 systemd-udevd[274656]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.404 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[82fb71ff-2698-4baf-97de-816e3a2c19e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.406 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a28a475-c1 in ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.408 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a28a475-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.408 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[21d6f22f-341a-4800-932d-5d7a1273978e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.409 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6b89fb56-6119-45e8-85f1-2d113c79b673]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 NetworkManager[44872]: <info>  [1759918503.4105] device (tap79d28498-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 10:15:03 compute-0 NetworkManager[44872]: <info>  [1759918503.4120] device (tap79d28498-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 ovn_controller[153187]: 2025-10-08T10:15:03Z|00046|binding|INFO|Setting lport 79d28498-fe9d-49dc-ad2c-bde432b239db ovn-installed in OVS
Oct 08 10:15:03 compute-0 ovn_controller[153187]: 2025-10-08T10:15:03Z|00047|binding|INFO|Setting lport 79d28498-fe9d-49dc-ad2c-bde432b239db up in Southbound
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 ceph-mon[73572]: pgmap v913: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 1 op/s
Oct 08 10:15:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.425 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[95dc8cc8-f0c7-4be5-86a2-0ed2f86145e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.439 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0a5081-248c-4b65-b54d-68665660e67b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.474 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2fc8d8-5713-4d45-aded-88f9c17608b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 NetworkManager[44872]: <info>  [1759918503.4804] manager: (tap0a28a475-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.480 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[163d6405-13bd-4593-b59f-8953f8c537a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 systemd-udevd[274660]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.515 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[54ef7e95-ffd5-4e61-b254-81edb31ca074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.520 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[bf42bf6f-cc63-4879-9963-eee14eb5a69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:03.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:03 compute-0 NetworkManager[44872]: <info>  [1759918503.5468] device (tap0a28a475-c0): carrier: link connected
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.554 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[457d7982-2c02-42cb-9515-563cab084b97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.579 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[10855808-d422-415e-863e-5310ef749217]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a28a475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:1f:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446217, 'reachable_time': 31275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274683, 'error': None, 'target': 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.602 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[2d26ba25-9e79-4149-b272-6bb39a8a495e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:1f72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446217, 'tstamp': 446217}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274684, 'error': None, 'target': 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.627 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[06527ae1-1f19-4d31-8859-1b6564aae41b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a28a475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:1f:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446217, 'reachable_time': 31275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274685, 'error': None, 'target': 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.673 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[5eea7432-d8f5-41f5-aaa3-97ccfcc7e9de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.746 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[19875fb4-dcb6-4b5f-8687-6f8adc33c7e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.748 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a28a475-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.748 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.749 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a28a475-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:15:03 compute-0 kernel: tap0a28a475-c0: entered promiscuous mode
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 NetworkManager[44872]: <info>  [1759918503.7518] manager: (tap0a28a475-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.754 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a28a475-c0, col_values=(('external_ids', {'iface-id': '5250d729-6010-4688-85e3-ca6a96907e0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 ovn_controller[153187]: 2025-10-08T10:15:03Z|00048|binding|INFO|Releasing lport 5250d729-6010-4688-85e3-ca6a96907e0d from this chassis (sb_readonly=0)
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.771 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a28a475-c59d-4526-93af-b8af40052e5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a28a475-c59d-4526-93af-b8af40052e5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.772 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f3fd76ce-dc3a-40b9-b837-665108781cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.773 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: global
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     log         /dev/log local0 debug
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     log-tag     haproxy-metadata-proxy-0a28a475-c59d-4526-93af-b8af40052e5c
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     user        root
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     group       root
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     maxconn     1024
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     pidfile     /var/lib/neutron/external/pids/0a28a475-c59d-4526-93af-b8af40052e5c.pid.haproxy
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     daemon
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: defaults
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     log global
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     mode http
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     option httplog
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     option dontlognull
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     option http-server-close
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     option forwardfor
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     retries                 3
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     timeout http-request    30s
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     timeout connect         30s
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     timeout client          32s
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     timeout server          32s
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     timeout http-keep-alive 30s
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: listen listener
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     bind 169.254.169.254:80
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:     http-request add-header X-OVN-Network-ID 0a28a475-c59d-4526-93af-b8af40052e5c
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 08 10:15:03 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.774 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'env', 'PROCESS_TAG=haproxy-0a28a475-c59d-4526-93af-b8af40052e5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a28a475-c59d-4526-93af-b8af40052e5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
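The generated haproxy config above listens on 169.254.169.254:80 inside the ovnmeta-0a28a475-... namespace, forwards requests to the UNIX socket backend /var/lib/neutron/metadata_proxy and stamps them with an X-OVN-Network-ID header, and the agent then launches haproxy in that namespace via neutron-rootwrap. A minimal stdlib sketch of talking to that backend socket directly with the same header, purely to illustrate what the proxy forwards; the request path and the idea that this is a sufficient request are assumptions (the real proxy also adds headers such as the client address via option forwardfor).

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a UNIX domain socket (stdlib only)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    # Socket path and header name are taken from the haproxy config above;
    # the metadata URL is an assumption for illustration.
    conn = UnixHTTPConnection("/var/lib/neutron/metadata_proxy")
    conn.request("GET", "/openstack/latest/meta_data.json",
                 headers={"X-OVN-Network-ID": "0a28a475-c59d-4526-93af-b8af40052e5c"})
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])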
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.872 2 DEBUG nova.virt.libvirt.driver [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.873 2 DEBUG nova.virt.libvirt.driver [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.873 2 DEBUG nova.virt.libvirt.driver [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:e6:b0:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.873 2 DEBUG nova.virt.libvirt.driver [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:40:4d:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 08 10:15:03 compute-0 nova_compute[262220]: 2025-10-08 10:15:03.959 2 DEBUG nova.virt.libvirt.guest [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <nova:creationTime>2025-10-08 10:15:03</nova:creationTime>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <nova:flavor name="m1.nano">
Oct 08 10:15:03 compute-0 nova_compute[262220]:     <nova:memory>128</nova:memory>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     <nova:disk>1</nova:disk>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     <nova:swap>0</nova:swap>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     <nova:vcpus>1</nova:vcpus>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   </nova:flavor>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <nova:owner>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   </nova:owner>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   <nova:ports>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct 08 10:15:03 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     <nova:port uuid="79d28498-fe9d-49dc-ad2c-bde432b239db">
Oct 08 10:15:03 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct 08 10:15:03 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:15:03 compute-0 nova_compute[262220]:   </nova:ports>
Oct 08 10:15:03 compute-0 nova_compute[262220]: </nova:instance>
Oct 08 10:15:03 compute-0 nova_compute[262220]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
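The XML Nova just stamped into the libvirt domain metadata records the instance name, flavor, owner and per-port fixed IPs. A small stdlib sketch for reading those fields back out of such a dump; the namespace URI is the one declared in the log, while the function name and returned keys are illustrative.

    import xml.etree.ElementTree as ET

    NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    def summarize_instance_metadata(xml_text):
        root = ET.fromstring(xml_text)  # the <nova:instance> element
        flavor = root.find("nova:flavor", NS)
        ports = [(port.get("uuid"), ip.get("address"))
                 for port in root.findall("nova:ports/nova:port", NS)
                 for ip in port.findall("nova:ip", NS)]
        return {
            "name": root.findtext("nova:name", namespaces=NS),
            "flavor": flavor.get("name") if flavor is not None else None,
            "memory_mb": root.findtext("nova:flavor/nova:memory", namespaces=NS),
            # e.g. [('be4ec274-...', '10.100.0.3'), ('79d28498-...', '10.100.0.23')]
            "ports": ports,
        }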
Oct 08 10:15:04 compute-0 nova_compute[262220]: 2025-10-08 10:15:04.074 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
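The Acquiring/acquired/released trio around do_attach_interface comes from oslo.concurrency's lockutils serializing per-interface work; here the lock was held for 8.875s while the VIF was plugged and the domain metadata updated. A tiny sketch of the same pattern, assuming only that oslo.concurrency is installed; lockutils.synchronized is the real decorator behind those "inner" frames, the lock name and function body are illustrative.

    from oslo_concurrency import lockutils

    @lockutils.synchronized("interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-None")
    def do_attach_interface():
        # Emits the same "Lock ... acquired/released :: waited/held" log lines.
        ...  # attach work runs while the per-interface lock is held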
Oct 08 10:15:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 1 op/s
Oct 08 10:15:04 compute-0 podman[274716]: 2025-10-08 10:15:04.145279561 +0000 UTC m=+0.026761572 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 08 10:15:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:04 compute-0 podman[274716]: 2025-10-08 10:15:04.321208249 +0000 UTC m=+0.202690230 container create 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 08 10:15:04 compute-0 systemd[1]: Started libpod-conmon-3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52.scope.
Oct 08 10:15:04 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915dac930a5508f0d71bb51887deafacf6554c7ddc11a4e1d1f27258efcfd64d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:04 compute-0 podman[274716]: 2025-10-08 10:15:04.509825865 +0000 UTC m=+0.391307866 container init 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 08 10:15:04 compute-0 podman[274716]: 2025-10-08 10:15:04.51555359 +0000 UTC m=+0.397035571 container start 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:15:04 compute-0 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [NOTICE]   (274736) : New worker (274738) forked
Oct 08 10:15:04 compute-0 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [NOTICE]   (274736) : Loading success.
Oct 08 10:15:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:04.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:04 compute-0 nova_compute[262220]: 2025-10-08 10:15:04.759 2 DEBUG nova.compute.manager [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:15:04 compute-0 nova_compute[262220]: 2025-10-08 10:15:04.760 2 DEBUG oslo_concurrency.lockutils [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:15:04 compute-0 nova_compute[262220]: 2025-10-08 10:15:04.760 2 DEBUG oslo_concurrency.lockutils [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:15:04 compute-0 nova_compute[262220]: 2025-10-08 10:15:04.760 2 DEBUG oslo_concurrency.lockutils [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:15:04 compute-0 nova_compute[262220]: 2025-10-08 10:15:04.761 2 DEBUG nova.compute.manager [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:15:04 compute-0 nova_compute[262220]: 2025-10-08 10:15:04.761 2 WARNING nova.compute.manager [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db for instance with vm_state active and task_state None.
Oct 08 10:15:04 compute-0 ovn_controller[153187]: 2025-10-08T10:15:04Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:4d:66 10.100.0.23
Oct 08 10:15:04 compute-0 ovn_controller[153187]: 2025-10-08T10:15:04Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:4d:66 10.100.0.23
Oct 08 10:15:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:05 compute-0 nova_compute[262220]: 2025-10-08 10:15:05.319 2 DEBUG nova.network.neutron [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port 79d28498-fe9d-49dc-ad2c-bde432b239db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:15:05 compute-0 nova_compute[262220]: 2025-10-08 10:15:05.319 2 DEBUG nova.network.neutron [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
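The instance_info_cache payload above is a JSON list of VIFs; each entry carries the port id, MAC ("address"), nested network.subnets[].ips[] records and an "active" flag, and the newly attached port 79d28498-... is still "active": false at this point. A stdlib sketch for flattening such a dump into one row per VIF (names are illustrative):

    import json

    def flatten_network_info(network_info_json):
        rows = []
        for vif in json.loads(network_info_json):
            ips = [ip["address"]
                   for subnet in vif["network"]["subnets"]
                   for ip in subnet["ips"]]
            rows.append((vif["id"], vif["address"], ips, vif["active"]))
        return rows

    # e.g. [('be4ec274-...', 'fa:16:3e:e6:b0:e0', ['10.100.0.3'], True),
    #       ('79d28498-...', 'fa:16:3e:40:4d:66', ['10.100.0.23'], False)]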
Oct 08 10:15:05 compute-0 nova_compute[262220]: 2025-10-08 10:15:05.409 2 DEBUG oslo_concurrency.lockutils [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:15:05 compute-0 ceph-mon[73572]: pgmap v914: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 1 op/s
Oct 08 10:15:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:05] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 08 10:15:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:05] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 08 10:15:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 08 10:15:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:06 compute-0 ceph-mon[73572]: pgmap v915: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 08 10:15:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:06.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:06 compute-0 nova_compute[262220]: 2025-10-08 10:15:06.857 2 DEBUG nova.compute.manager [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:15:06 compute-0 nova_compute[262220]: 2025-10-08 10:15:06.858 2 DEBUG oslo_concurrency.lockutils [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:15:06 compute-0 nova_compute[262220]: 2025-10-08 10:15:06.858 2 DEBUG oslo_concurrency.lockutils [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:15:06 compute-0 nova_compute[262220]: 2025-10-08 10:15:06.858 2 DEBUG oslo_concurrency.lockutils [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:15:06 compute-0 nova_compute[262220]: 2025-10-08 10:15:06.858 2 DEBUG nova.compute.manager [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:15:06 compute-0 nova_compute[262220]: 2025-10-08 10:15:06.858 2 WARNING nova.compute.manager [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db for instance with vm_state active and task_state None.
Oct 08 10:15:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:07.155Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:15:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:07.155Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:15:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:07.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
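Alertmanager on this host keeps failing to deliver the ceph-dashboard webhook to compute-1/compute-2 on port 8443 (i/o timeout here, context deadline exceeded again at 10:15:17 below), so the notifications are dropped after two attempts. A minimal stdlib sketch of a receiver that would accept that POST, purely to illustrate the expected endpoint; the port and path are taken from the log, the payload handling is an assumption about Alertmanager's webhook JSON.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class PrometheusReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_response(404); self.end_headers(); return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            alerts = json.loads(body or b"{}").get("alerts", [])
            print(f"received {len(alerts)} alert(s)")
            self.send_response(200); self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), PrometheusReceiver).serve_forever()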
Oct 08 10:15:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:07.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:15:07 compute-0 sudo[274750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:15:07 compute-0 sudo[274750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:07 compute-0 sudo[274750]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:08 compute-0 nova_compute[262220]: 2025-10-08 10:15:08.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 08 10:15:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:08 compute-0 nova_compute[262220]: 2025-10-08 10:15:08.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940053b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:09 compute-0 ceph-mon[73572]: pgmap v916: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 08 10:15:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:09.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 08 10:15:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940053d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:11 compute-0 ceph-mon[73572]: pgmap v917: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 08 10:15:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:11.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 08 10:15:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:15:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:12.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:15:12 compute-0 podman[274780]: 2025-10-08 10:15:12.928243699 +0000 UTC m=+0.085326399 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
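These periodic health_status=healthy events come from podman running the container's configured healthcheck ('test': '/openstack/healthcheck', mounted read-only per the config_data above). A short sketch of triggering the same check on demand and reading the recorded status; podman healthcheck run and podman inspect are standard podman subcommands, and the container name is the one from the log.

    import json, subprocess

    name = "ovn_controller"

    # Exit code 0 means the check passed, non-zero means it failed.
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print("healthcheck exit:", rc)

    # The most recent results are kept in the container state.
    out = subprocess.run(["podman", "inspect", name],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out)[0]["State"]["Health"]["Status"])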
Oct 08 10:15:13 compute-0 nova_compute[262220]: 2025-10-08 10:15:13.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:13 compute-0 nova_compute[262220]: 2025-10-08 10:15:13.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:13 compute-0 ceph-mon[73572]: pgmap v918: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 08 10:15:13 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/249347053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:15:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:13.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct 08 10:15:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940053f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:15 compute-0 ceph-mon[73572]: pgmap v919: 353 pgs: 353 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct 08 10:15:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:15.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:15] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 08 10:15:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:15] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 08 10:15:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct 08 10:15:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:16.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:17.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:15:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:17 compute-0 ceph-mon[73572]: pgmap v920: 353 pgs: 353 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct 08 10:15:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:17.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:15:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:15:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:15:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:15:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:15:18 compute-0 nova_compute[262220]: 2025-10-08 10:15:18.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:15:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:15:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:15:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:15:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct 08 10:15:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:18 compute-0 nova_compute[262220]: 2025-10-08 10:15:18.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:15:18 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2910948324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:15:18 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3725664723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:15:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:19 compute-0 ceph-mon[73572]: pgmap v921: 353 pgs: 353 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct 08 10:15:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:19.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:19 compute-0 podman[274815]: 2025-10-08 10:15:19.906894438 +0000 UTC m=+0.067936118 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:15:19 compute-0 podman[274814]: 2025-10-08 10:15:19.907068094 +0000 UTC m=+0.071737402 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 08 10:15:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 08 10:15:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 08 10:15:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2620818139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:15:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 08 10:15:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2620818139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:15:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:20.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:21 compute-0 ceph-mon[73572]: pgmap v922: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 08 10:15:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2620818139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:15:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2620818139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:15:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:21.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:21 compute-0 sudo[274855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:15:21 compute-0 sudo[274855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:21 compute-0 sudo[274855]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:21 compute-0 sudo[274880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:15:21 compute-0 sudo[274880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:15:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 10:15:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 10:15:22 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:22 compute-0 sudo[274880]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:22 compute-0 sudo[274938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:15:22 compute-0 sudo[274938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:22 compute-0 sudo[274938]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:22 compute-0 sudo[274963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- inventory --format=json-pretty --filter-for-batch
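cephadm is about to run ceph-volume inventory inside a one-off ceph container (the happy_wilbur container created just below) to refresh this host's disk facts. A sketch of invoking the same command from Python and parsing its JSON output; the cephadm path, image and flags are copied from the sudo line above, while the keys read from each device record ("path", "available") are assumptions about ceph-volume's inventory schema.

    import json, subprocess

    FSID = "787292cc-8154-50c4-9e00-e9be3e817149"
    CEPHADM = f"/var/lib/ceph/{FSID}/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36"
    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID,
         "--", "inventory", "--format=json-pretty", "--filter-for-batch"],
        check=True, capture_output=True, text=True).stdout

    for dev in json.loads(out):
        print(dev.get("path"), dev.get("available"))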
Oct 08 10:15:22 compute-0 sudo[274963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:23 compute-0 nova_compute[262220]: 2025-10-08 10:15:23.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:23 compute-0 podman[275029]: 2025-10-08 10:15:23.134089121 +0000 UTC m=+0.023879291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:15:23 compute-0 podman[275029]: 2025-10-08 10:15:23.228248344 +0000 UTC m=+0.118038484 container create 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:15:23 compute-0 systemd[1]: Started libpod-conmon-751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea.scope.
Oct 08 10:15:23 compute-0 nova_compute[262220]: 2025-10-08 10:15:23.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:15:23 compute-0 podman[275029]: 2025-10-08 10:15:23.360275747 +0000 UTC m=+0.250065907 container init 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:15:23 compute-0 podman[275029]: 2025-10-08 10:15:23.3712336 +0000 UTC m=+0.261023750 container start 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:15:23 compute-0 happy_wilbur[275046]: 167 167
Oct 08 10:15:23 compute-0 systemd[1]: libpod-751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea.scope: Deactivated successfully.
Oct 08 10:15:23 compute-0 ceph-mon[73572]: pgmap v923: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:15:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:23 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:23 compute-0 podman[275029]: 2025-10-08 10:15:23.393730475 +0000 UTC m=+0.283520615 container attach 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 10:15:23 compute-0 podman[275029]: 2025-10-08 10:15:23.394613854 +0000 UTC m=+0.284404014 container died 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-419d83d1abbdfa7ec69922c40793dc189e2e0003fb8d089888941dcb9d2581e0-merged.mount: Deactivated successfully.
Oct 08 10:15:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:23.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:23 compute-0 podman[275029]: 2025-10-08 10:15:23.616491401 +0000 UTC m=+0.506281541 container remove 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 10:15:23 compute-0 systemd[1]: libpod-conmon-751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea.scope: Deactivated successfully.
Oct 08 10:15:23 compute-0 podman[275071]: 2025-10-08 10:15:23.819689367 +0000 UTC m=+0.051197300 container create 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:15:23 compute-0 systemd[1]: Started libpod-conmon-0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759.scope.
Oct 08 10:15:23 compute-0 podman[275071]: 2025-10-08 10:15:23.804067103 +0000 UTC m=+0.035575066 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:15:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:23 compute-0 podman[275071]: 2025-10-08 10:15:23.923819912 +0000 UTC m=+0.155327855 container init 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:15:23 compute-0 podman[275071]: 2025-10-08 10:15:23.929917627 +0000 UTC m=+0.161425570 container start 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 10:15:23 compute-0 podman[275071]: 2025-10-08 10:15:23.934450334 +0000 UTC m=+0.165958277 container attach 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 08 10:15:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]: [
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:     {
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         "available": false,
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         "being_replaced": false,
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         "ceph_device_lvm": false,
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         "lsm_data": {},
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         "lvs": [],
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         "path": "/dev/sr0",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         "rejected_reasons": [
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "Has a FileSystem",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "Insufficient space (<5GB)"
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         ],
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         "sys_api": {
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "actuators": null,
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "device_nodes": [
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:                 "sr0"
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             ],
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "devname": "sr0",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "human_readable_size": "482.00 KB",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "id_bus": "ata",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "model": "QEMU DVD-ROM",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "nr_requests": "2",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "parent": "/dev/sr0",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "partitions": {},
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "path": "/dev/sr0",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "removable": "1",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "rev": "2.5+",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "ro": "0",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "rotational": "0",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "sas_address": "",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "sas_device_handle": "",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "scheduler_mode": "mq-deadline",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "sectors": 0,
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "sectorsize": "2048",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "size": 493568.0,
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "support_discard": "2048",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "type": "disk",
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:             "vendor": "QEMU"
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:         }
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]:     }
Oct 08 10:15:24 compute-0 hardcore_jemison[275086]: ]
Oct 08 10:15:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:24 compute-0 systemd[1]: libpod-0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759.scope: Deactivated successfully.
Oct 08 10:15:24 compute-0 podman[276393]: 2025-10-08 10:15:24.766316862 +0000 UTC m=+0.024537302 container died 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct 08 10:15:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:24.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5-merged.mount: Deactivated successfully.
Oct 08 10:15:24 compute-0 podman[276393]: 2025-10-08 10:15:24.80382982 +0000 UTC m=+0.062050240 container remove 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:15:24 compute-0 systemd[1]: libpod-conmon-0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759.scope: Deactivated successfully.
Oct 08 10:15:24 compute-0 sudo[274963]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:15:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:15:24 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:15:24 compute-0 sudo[276408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:15:24 compute-0 sudo[276408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:25 compute-0 sudo[276408]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:25 compute-0 sudo[276433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:15:25 compute-0 sudo[276433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:25 compute-0 podman[276501]: 2025-10-08 10:15:25.468159011 +0000 UTC m=+0.043139101 container create 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 10:15:25 compute-0 ceph-mon[73572]: pgmap v924: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:15:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:15:25 compute-0 systemd[1]: Started libpod-conmon-2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9.scope.
Oct 08 10:15:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:15:25 compute-0 podman[276501]: 2025-10-08 10:15:25.452841067 +0000 UTC m=+0.027821187 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:15:25 compute-0 podman[276501]: 2025-10-08 10:15:25.56157192 +0000 UTC m=+0.136552040 container init 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 08 10:15:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:25.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:25 compute-0 podman[276501]: 2025-10-08 10:15:25.570263711 +0000 UTC m=+0.145243811 container start 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 08 10:15:25 compute-0 silly_euclid[276518]: 167 167
Oct 08 10:15:25 compute-0 podman[276501]: 2025-10-08 10:15:25.574158305 +0000 UTC m=+0.149138415 container attach 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:15:25 compute-0 podman[276501]: 2025-10-08 10:15:25.574775916 +0000 UTC m=+0.149756046 container died 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 08 10:15:25 compute-0 systemd[1]: libpod-2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9.scope: Deactivated successfully.
Oct 08 10:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-629f31cf0938bfa229acb8da33f5cae40a53bb6f135c2e131c6a9a256eb111c9-merged.mount: Deactivated successfully.
Oct 08 10:15:25 compute-0 podman[276501]: 2025-10-08 10:15:25.624990963 +0000 UTC m=+0.199971073 container remove 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 10:15:25 compute-0 systemd[1]: libpod-conmon-2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9.scope: Deactivated successfully.
Oct 08 10:15:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:25] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 08 10:15:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:25] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 08 10:15:25 compute-0 podman[276542]: 2025-10-08 10:15:25.79192529 +0000 UTC m=+0.045633970 container create a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 08 10:15:25 compute-0 systemd[1]: Started libpod-conmon-a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37.scope.
Oct 08 10:15:25 compute-0 podman[276542]: 2025-10-08 10:15:25.770675966 +0000 UTC m=+0.024384676 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:15:25 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:25 compute-0 podman[276542]: 2025-10-08 10:15:25.892237132 +0000 UTC m=+0.145945812 container init a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:15:25 compute-0 podman[276542]: 2025-10-08 10:15:25.902205293 +0000 UTC m=+0.155913963 container start a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:15:25 compute-0 podman[276542]: 2025-10-08 10:15:25.905932294 +0000 UTC m=+0.159640994 container attach a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:15:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 102 KiB/s wr, 78 op/s
Oct 08 10:15:26 compute-0 quirky_payne[276558]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:15:26 compute-0 quirky_payne[276558]: --> All data devices are unavailable
Oct 08 10:15:26 compute-0 systemd[1]: libpod-a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37.scope: Deactivated successfully.
Oct 08 10:15:26 compute-0 podman[276542]: 2025-10-08 10:15:26.290309795 +0000 UTC m=+0.544018465 container died a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 10:15:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618-merged.mount: Deactivated successfully.
Oct 08 10:15:26 compute-0 podman[276542]: 2025-10-08 10:15:26.33236077 +0000 UTC m=+0.586069440 container remove a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 10:15:26 compute-0 systemd[1]: libpod-conmon-a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37.scope: Deactivated successfully.
Oct 08 10:15:26 compute-0 sudo[276433]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:26 compute-0 sudo[276589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:15:26 compute-0 sudo[276589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:26 compute-0 sudo[276589]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:26 compute-0 sudo[276614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:15:26 compute-0 sudo[276614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:15:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:26.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:15:26 compute-0 podman[276681]: 2025-10-08 10:15:26.953850811 +0000 UTC m=+0.039372760 container create f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:15:26 compute-0 systemd[1]: Started libpod-conmon-f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44.scope.
Oct 08 10:15:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:15:27 compute-0 podman[276681]: 2025-10-08 10:15:26.937541416 +0000 UTC m=+0.023063395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:15:27 compute-0 podman[276681]: 2025-10-08 10:15:27.03420303 +0000 UTC m=+0.119725009 container init f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:15:27 compute-0 podman[276681]: 2025-10-08 10:15:27.041468274 +0000 UTC m=+0.126990223 container start f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:15:27 compute-0 podman[276681]: 2025-10-08 10:15:27.044485971 +0000 UTC m=+0.130007960 container attach f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:15:27 compute-0 exciting_banach[276697]: 167 167
Oct 08 10:15:27 compute-0 podman[276681]: 2025-10-08 10:15:27.049102319 +0000 UTC m=+0.134624298 container died f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:15:27 compute-0 systemd[1]: libpod-f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44.scope: Deactivated successfully.
Oct 08 10:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-665a6ac2fce105bdc07a9f3599e21d2078ad6e65911087706dfe0970b4a2e787-merged.mount: Deactivated successfully.
Oct 08 10:15:27 compute-0 podman[276681]: 2025-10-08 10:15:27.082739483 +0000 UTC m=+0.168261442 container remove f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 10:15:27 compute-0 systemd[1]: libpod-conmon-f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44.scope: Deactivated successfully.
Oct 08 10:15:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:27.157Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:15:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:27.161Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:15:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:27 compute-0 podman[276722]: 2025-10-08 10:15:27.259076493 +0000 UTC m=+0.043670337 container create 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 10:15:27 compute-0 systemd[1]: Started libpod-conmon-378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717.scope.
Oct 08 10:15:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:27 compute-0 podman[276722]: 2025-10-08 10:15:27.242233641 +0000 UTC m=+0.026827515 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:15:27 compute-0 podman[276722]: 2025-10-08 10:15:27.341279942 +0000 UTC m=+0.125873806 container init 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 10:15:27 compute-0 podman[276722]: 2025-10-08 10:15:27.350314643 +0000 UTC m=+0.134908487 container start 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:15:27 compute-0 podman[276722]: 2025-10-08 10:15:27.353457895 +0000 UTC m=+0.138051759 container attach 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:15:27 compute-0 ceph-mon[73572]: pgmap v925: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 102 KiB/s wr, 78 op/s
Oct 08 10:15:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:27.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:27 compute-0 musing_dirac[276739]: {
Oct 08 10:15:27 compute-0 musing_dirac[276739]:     "1": [
Oct 08 10:15:27 compute-0 musing_dirac[276739]:         {
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "devices": [
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "/dev/loop3"
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             ],
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "lv_name": "ceph_lv0",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "lv_size": "21470642176",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "name": "ceph_lv0",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "tags": {
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.cluster_name": "ceph",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.crush_device_class": "",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.encrypted": "0",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.osd_id": "1",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.type": "block",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.vdo": "0",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:                 "ceph.with_tpm": "0"
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             },
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "type": "block",
Oct 08 10:15:27 compute-0 musing_dirac[276739]:             "vg_name": "ceph_vg0"
Oct 08 10:15:27 compute-0 musing_dirac[276739]:         }
Oct 08 10:15:27 compute-0 musing_dirac[276739]:     ]
Oct 08 10:15:27 compute-0 musing_dirac[276739]: }
Oct 08 10:15:27 compute-0 systemd[1]: libpod-378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717.scope: Deactivated successfully.
Oct 08 10:15:27 compute-0 conmon[276739]: conmon 378eccdb956d6119c659 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717.scope/container/memory.events
Oct 08 10:15:27 compute-0 podman[276722]: 2025-10-08 10:15:27.667397307 +0000 UTC m=+0.451991171 container died 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c-merged.mount: Deactivated successfully.
Oct 08 10:15:27 compute-0 podman[276722]: 2025-10-08 10:15:27.709782353 +0000 UTC m=+0.494376197 container remove 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Oct 08 10:15:27 compute-0 systemd[1]: libpod-conmon-378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717.scope: Deactivated successfully.
Oct 08 10:15:27 compute-0 sudo[276614]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:27 compute-0 sudo[276759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:15:27 compute-0 sudo[276759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:27 compute-0 sudo[276759]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:27 compute-0 sudo[276784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:15:27 compute-0 sudo[276784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:15:27 compute-0 sudo[276809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:15:27 compute-0 sudo[276809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:27 compute-0 sudo[276809]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:28 compute-0 nova_compute[262220]: 2025-10-08 10:15:28.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 102 KiB/s wr, 78 op/s
Oct 08 10:15:28 compute-0 podman[276876]: 2025-10-08 10:15:28.304392958 +0000 UTC m=+0.041131956 container create eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:15:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:28 compute-0 nova_compute[262220]: 2025-10-08 10:15:28.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:28 compute-0 systemd[1]: Started libpod-conmon-eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654.scope.
Oct 08 10:15:28 compute-0 podman[276876]: 2025-10-08 10:15:28.289022202 +0000 UTC m=+0.025761230 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:15:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:15:28 compute-0 podman[276876]: 2025-10-08 10:15:28.419385312 +0000 UTC m=+0.156124360 container init eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:15:28 compute-0 podman[276876]: 2025-10-08 10:15:28.427360859 +0000 UTC m=+0.164099857 container start eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:15:28 compute-0 podman[276876]: 2025-10-08 10:15:28.430599553 +0000 UTC m=+0.167338571 container attach eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:15:28 compute-0 determined_greider[276893]: 167 167
Oct 08 10:15:28 compute-0 systemd[1]: libpod-eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654.scope: Deactivated successfully.
Oct 08 10:15:28 compute-0 conmon[276893]: conmon eee85027f91ef248f1f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654.scope/container/memory.events
Oct 08 10:15:28 compute-0 podman[276898]: 2025-10-08 10:15:28.482273998 +0000 UTC m=+0.030892567 container died eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 08 10:15:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-82a631b398f69a9460cda078958d386ae210489e09578fbad94a4a0463545263-merged.mount: Deactivated successfully.
Oct 08 10:15:28 compute-0 podman[276898]: 2025-10-08 10:15:28.527945889 +0000 UTC m=+0.076564458 container remove eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:15:28 compute-0 systemd[1]: libpod-conmon-eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654.scope: Deactivated successfully.
Oct 08 10:15:28 compute-0 ceph-mon[73572]: pgmap v926: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 102 KiB/s wr, 78 op/s
Oct 08 10:15:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:28 compute-0 podman[276920]: 2025-10-08 10:15:28.756284634 +0000 UTC m=+0.049452223 container create be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 10:15:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:15:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:28.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:15:28 compute-0 systemd[1]: Started libpod-conmon-be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e.scope.
Oct 08 10:15:28 compute-0 podman[276920]: 2025-10-08 10:15:28.733954915 +0000 UTC m=+0.027122494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:15:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:15:28 compute-0 podman[276920]: 2025-10-08 10:15:28.850717857 +0000 UTC m=+0.143885426 container init be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:15:28 compute-0 podman[276920]: 2025-10-08 10:15:28.86166435 +0000 UTC m=+0.154831899 container start be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:15:28 compute-0 podman[276920]: 2025-10-08 10:15:28.865395499 +0000 UTC m=+0.158563058 container attach be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:15:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:29 compute-0 lvm[277012]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:15:29 compute-0 lvm[277012]: VG ceph_vg0 finished
Oct 08 10:15:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:15:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:29.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:15:29 compute-0 nostalgic_black[276937]: {}
Oct 08 10:15:29 compute-0 systemd[1]: libpod-be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e.scope: Deactivated successfully.
Oct 08 10:15:29 compute-0 systemd[1]: libpod-be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e.scope: Consumed 1.167s CPU time.
Oct 08 10:15:29 compute-0 podman[276920]: 2025-10-08 10:15:29.628349298 +0000 UTC m=+0.921516847 container died be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 08 10:15:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226-merged.mount: Deactivated successfully.
Oct 08 10:15:29 compute-0 podman[276920]: 2025-10-08 10:15:29.669912856 +0000 UTC m=+0.963080405 container remove be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 08 10:15:29 compute-0 systemd[1]: libpod-conmon-be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e.scope: Deactivated successfully.
Oct 08 10:15:29 compute-0 sudo[276784]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:15:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:15:29 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:29 compute-0 sudo[277030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:15:29 compute-0 sudo[277030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:29 compute-0 sudo[277030]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 103 KiB/s wr, 79 op/s
Oct 08 10:15:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:30 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:30 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:15:30 compute-0 ceph-mon[73572]: pgmap v927: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 103 KiB/s wr, 79 op/s
Oct 08 10:15:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:30.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:15:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:31.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:15:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 08 10:15:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:15:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:32.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:15:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:15:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:15:32 compute-0 podman[277058]: 2025-10-08 10:15:32.918487616 +0000 UTC m=+0.073193579 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 08 10:15:33 compute-0 nova_compute[262220]: 2025-10-08 10:15:33.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:33 compute-0 ceph-mon[73572]: pgmap v928: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 08 10:15:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:15:33 compute-0 nova_compute[262220]: 2025-10-08 10:15:33.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:33.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 08 10:15:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:15:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:34.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:15:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:35 compute-0 ceph-mon[73572]: pgmap v929: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 08 10:15:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:35.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:35] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 08 10:15:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:35] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 08 10:15:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:15:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:36.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:37.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:15:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:37.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:15:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:37 compute-0 ceph-mon[73572]: pgmap v930: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:15:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:15:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:37.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:15:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:15:38 compute-0 nova_compute[262220]: 2025-10-08 10:15:38.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:15:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:38 compute-0 nova_compute[262220]: 2025-10-08 10:15:38.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:38.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:39 compute-0 ceph-mon[73572]: pgmap v931: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:15:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:39.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:39 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:39.628 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:15:39 compute-0 nova_compute[262220]: 2025-10-08 10:15:39.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:39 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:39.629 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:15:39 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:39.630 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:15:39 compute-0 nova_compute[262220]: 2025-10-08 10:15:39.811 2 DEBUG nova.compute.manager [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:15:39 compute-0 nova_compute[262220]: 2025-10-08 10:15:39.811 2 DEBUG nova.compute.manager [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-79d28498-fe9d-49dc-ad2c-bde432b239db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:15:39 compute-0 nova_compute[262220]: 2025-10-08 10:15:39.812 2 DEBUG oslo_concurrency.lockutils [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:15:39 compute-0 nova_compute[262220]: 2025-10-08 10:15:39.812 2 DEBUG oslo_concurrency.lockutils [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:15:39 compute-0 nova_compute[262220]: 2025-10-08 10:15:39.812 2 DEBUG nova.network.neutron [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port 79d28498-fe9d-49dc-ad2c-bde432b239db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:15:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 08 10:15:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:40.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:40 compute-0 nova_compute[262220]: 2025-10-08 10:15:40.871 2 DEBUG nova.network.neutron [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port 79d28498-fe9d-49dc-ad2c-bde432b239db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:15:40 compute-0 nova_compute[262220]: 2025-10-08 10:15:40.871 2 DEBUG nova.network.neutron [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:15:40 compute-0 nova_compute[262220]: 2025-10-08 10:15:40.934 2 DEBUG oslo_concurrency.lockutils [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:15:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:41 compute-0 ceph-mon[73572]: pgmap v932: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 08 10:15:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:41.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 08 10:15:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:42.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:43 compute-0 ceph-mon[73572]: pgmap v933: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 08 10:15:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:15:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:43.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.924 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.926 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:15:43 compute-0 nova_compute[262220]: 2025-10-08 10:15:43.926 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:15:43 compute-0 podman[277089]: 2025-10-08 10:15:43.936145222 +0000 UTC m=+0.093285445 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 08 10:15:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 08 10:15:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:15:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/165788996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.367 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:15:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/165788996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.524 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.525 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 08 10:15:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.749 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.750 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4343MB free_disk=59.89706802368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.750 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.751 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:15:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:44.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.871 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.871 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.871 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:15:44 compute-0 nova_compute[262220]: 2025-10-08 10:15:44.911 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:15:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:15:45 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461361738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:15:45 compute-0 nova_compute[262220]: 2025-10-08 10:15:45.395 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:15:45 compute-0 nova_compute[262220]: 2025-10-08 10:15:45.403 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:15:45 compute-0 nova_compute[262220]: 2025-10-08 10:15:45.431 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:15:45 compute-0 nova_compute[262220]: 2025-10-08 10:15:45.433 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:15:45 compute-0 nova_compute[262220]: 2025-10-08 10:15:45.434 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:15:45 compute-0 ceph-mon[73572]: pgmap v934: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 08 10:15:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3461361738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:15:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:45.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:45] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 08 10:15:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:45] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 08 10:15:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 24 KiB/s wr, 2 op/s
Oct 08 10:15:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:46.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:47.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:15:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:47 compute-0 nova_compute[262220]: 2025-10-08 10:15:47.433 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:15:47 compute-0 nova_compute[262220]: 2025-10-08 10:15:47.434 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:15:47 compute-0 ceph-mon[73572]: pgmap v935: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 24 KiB/s wr, 2 op/s
Oct 08 10:15:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:47.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:15:47
Oct 08 10:15:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:15:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:15:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['images', 'vms', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', '.nfs', 'default.rgw.meta', 'default.rgw.control']
Oct 08 10:15:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:15:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:15:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:15:47 compute-0 nova_compute[262220]: 2025-10-08 10:15:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:15:47 compute-0 nova_compute[262220]: 2025-10-08 10:15:47.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:15:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:15:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:15:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:15:48 compute-0 nova_compute[262220]: 2025-10-08 10:15:48.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:48 compute-0 sudo[277165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:15:48 compute-0 sudo[277165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:15:48 compute-0 sudo[277165]: pam_unix(sudo:session): session closed for user root
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001520898958943804 of space, bias 1.0, pg target 0.4562696876831412 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 24 KiB/s wr, 2 op/s
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:15:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:15:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:48 compute-0 nova_compute[262220]: 2025-10-08 10:15:48.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:15:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:48 compute-0 nova_compute[262220]: 2025-10-08 10:15:48.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:15:48 compute-0 nova_compute[262220]: 2025-10-08 10:15:48.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:15:48 compute-0 nova_compute[262220]: 2025-10-08 10:15:48.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:15:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:49 compute-0 nova_compute[262220]: 2025-10-08 10:15:49.359 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:15:49 compute-0 nova_compute[262220]: 2025-10-08 10:15:49.360 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:15:49 compute-0 nova_compute[262220]: 2025-10-08 10:15:49.360 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 08 10:15:49 compute-0 nova_compute[262220]: 2025-10-08 10:15:49.360 2 DEBUG nova.objects.instance [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:15:49 compute-0 ceph-mon[73572]: pgmap v936: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 24 KiB/s wr, 2 op/s
Oct 08 10:15:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2253882089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:15:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:49.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 26 KiB/s wr, 3 op/s
Oct 08 10:15:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/352928791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:15:50 compute-0 ceph-mon[73572]: pgmap v937: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 26 KiB/s wr, 3 op/s
Oct 08 10:15:50 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:50.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:50 compute-0 podman[277192]: 2025-10-08 10:15:50.91524029 +0000 UTC m=+0.067862357 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 08 10:15:50 compute-0 podman[277193]: 2025-10-08 10:15:50.928876989 +0000 UTC m=+0.072616540 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 08 10:15:51 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:51.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/845186431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:15:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 4.7 KiB/s wr, 1 op/s
Oct 08 10:15:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:15:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:52.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:15:52 compute-0 ceph-mon[73572]: pgmap v938: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 4.7 KiB/s wr, 1 op/s
Oct 08 10:15:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2453091126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:15:53 compute-0 nova_compute[262220]: 2025-10-08 10:15:53.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:53 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:53 compute-0 nova_compute[262220]: 2025-10-08 10:15:53.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 8.3 KiB/s wr, 20 op/s
Oct 08 10:15:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:54 compute-0 nova_compute[262220]: 2025-10-08 10:15:54.390 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:15:54 compute-0 nova_compute[262220]: 2025-10-08 10:15:54.413 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:15:54 compute-0 nova_compute[262220]: 2025-10-08 10:15:54.413 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 08 10:15:54 compute-0 nova_compute[262220]: 2025-10-08 10:15:54.414 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:15:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:15:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:54.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:15:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:55 compute-0 ceph-mon[73572]: pgmap v939: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 8.3 KiB/s wr, 20 op/s
Oct 08 10:15:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:15:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:55.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:15:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:55] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct 08 10:15:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:55] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct 08 10:15:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 6.0 KiB/s wr, 20 op/s
Oct 08 10:15:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:56 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:56.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:57.164Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:15:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:57 compute-0 ceph-mon[73572]: pgmap v940: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 6.0 KiB/s wr, 20 op/s
Oct 08 10:15:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:57.415 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:15:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:57.415 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:15:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:15:57.416 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:15:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:57.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:15:58 compute-0 nova_compute[262220]: 2025-10-08 10:15:58.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 6.0 KiB/s wr, 20 op/s
Oct 08 10:15:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:58 compute-0 nova_compute[262220]: 2025-10-08 10:15:58.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:15:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:58.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:15:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:15:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:15:59 compute-0 ceph-mon[73572]: pgmap v941: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 6.0 KiB/s wr, 20 op/s
Oct 08 10:15:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:15:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:15:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:59.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 8.0 KiB/s wr, 154 op/s
Oct 08 10:16:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:00 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:00.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:01 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:01 compute-0 ceph-mon[73572]: pgmap v942: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 8.0 KiB/s wr, 154 op/s
Oct 08 10:16:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:01.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.7 KiB/s wr, 153 op/s
Oct 08 10:16:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:02 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:02.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:16:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:16:03 compute-0 nova_compute[262220]: 2025-10-08 10:16:03.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:03 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:03 compute-0 nova_compute[262220]: 2025-10-08 10:16:03.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:03 compute-0 ceph-mon[73572]: pgmap v943: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.7 KiB/s wr, 153 op/s
Oct 08 10:16:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:16:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:03.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:03 compute-0 podman[277244]: 2025-10-08 10:16:03.904691016 +0000 UTC m=+0.061081499 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 08 10:16:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 6.0 KiB/s wr, 153 op/s
Oct 08 10:16:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:04.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:05 compute-0 ceph-mon[73572]: pgmap v944: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 6.0 KiB/s wr, 153 op/s
Oct 08 10:16:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:05.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:16:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:16:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 2.3 KiB/s wr, 134 op/s
Oct 08 10:16:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:06 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:06.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:07.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:16:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:07 compute-0 ceph-mon[73572]: pgmap v945: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 2.3 KiB/s wr, 134 op/s
Oct 08 10:16:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:07.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:16:08 compute-0 nova_compute[262220]: 2025-10-08 10:16:08.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:08 compute-0 sudo[277271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:16:08 compute-0 sudo[277271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:08 compute-0 sudo[277271]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 2.3 KiB/s wr, 134 op/s
Oct 08 10:16:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:08 compute-0 nova_compute[262220]: 2025-10-08 10:16:08.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:08.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:09 compute-0 ceph-mon[73572]: pgmap v946: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 2.3 KiB/s wr, 134 op/s
Oct 08 10:16:09 compute-0 ovn_controller[153187]: 2025-10-08T10:16:09Z|00049|memory_trim|INFO|Detected inactivity (last active 30025 ms ago): trimming memory
Oct 08 10:16:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:09.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 8.0 KiB/s wr, 135 op/s
Oct 08 10:16:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:10 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:10.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:11 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:11 compute-0 ceph-mon[73572]: pgmap v947: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 8.0 KiB/s wr, 135 op/s
Oct 08 10:16:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:11.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 6.0 KiB/s wr, 1 op/s
Oct 08 10:16:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:12 compute-0 ceph-mon[73572]: pgmap v948: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 6.0 KiB/s wr, 1 op/s
Oct 08 10:16:12 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:12.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:13 compute-0 nova_compute[262220]: 2025-10-08 10:16:13.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:13 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:13 compute-0 nova_compute[262220]: 2025-10-08 10:16:13.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:13.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 7.3 KiB/s wr, 2 op/s
Oct 08 10:16:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:14.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:14 compute-0 podman[277302]: 2025-10-08 10:16:14.94346933 +0000 UTC m=+0.106545883 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:16:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:15 compute-0 ceph-mon[73572]: pgmap v949: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 7.3 KiB/s wr, 2 op/s
Oct 08 10:16:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:15.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:16:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:16:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.0 KiB/s wr, 1 op/s
Oct 08 10:16:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:16 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:16.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:17.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:16:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:17 compute-0 ceph-mon[73572]: pgmap v950: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.0 KiB/s wr, 1 op/s
Oct 08 10:16:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:16:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:16:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:16:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:16:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:16:18 compute-0 nova_compute[262220]: 2025-10-08 10:16:18.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:16:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:16:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:16:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:16:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.0 KiB/s wr, 1 op/s
Oct 08 10:16:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:16:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:18 compute-0 nova_compute[262220]: 2025-10-08 10:16:18.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:18.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:19 compute-0 ceph-mon[73572]: pgmap v951: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.0 KiB/s wr, 1 op/s
Oct 08 10:16:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:19.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 8.3 KiB/s wr, 2 op/s
Oct 08 10:16:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:20 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:20.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:21 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:21 compute-0 ceph-mon[73572]: pgmap v952: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 8.3 KiB/s wr, 2 op/s
Oct 08 10:16:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2225824535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:16:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2225824535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:16:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:21.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:21 compute-0 podman[277336]: 2025-10-08 10:16:21.898786971 +0000 UTC m=+0.056464380 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:16:21 compute-0 podman[277337]: 2025-10-08 10:16:21.919918222 +0000 UTC m=+0.076839367 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:16:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.7 KiB/s wr, 1 op/s
Oct 08 10:16:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:22 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:22.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:23 compute-0 nova_compute[262220]: 2025-10-08 10:16:23.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:23 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:23 compute-0 ceph-mon[73572]: pgmap v953: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.7 KiB/s wr, 1 op/s
Oct 08 10:16:23 compute-0 nova_compute[262220]: 2025-10-08 10:16:23.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:23.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 3 op/s
Oct 08 10:16:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:25 compute-0 ceph-mon[73572]: pgmap v954: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 3 op/s
Oct 08 10:16:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:25] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 08 10:16:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:25] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 08 10:16:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 2 op/s
Oct 08 10:16:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:26 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:26.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:27.166Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:16:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:27.166Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:16:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:27.166Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:16:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:27 compute-0 ceph-mon[73572]: pgmap v955: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 2 op/s
Oct 08 10:16:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:16:28 compute-0 nova_compute[262220]: 2025-10-08 10:16:28.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:28 compute-0 sudo[277383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:16:28 compute-0 sudo[277383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:28 compute-0 sudo[277383]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 2 op/s
Oct 08 10:16:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:28 compute-0 nova_compute[262220]: 2025-10-08 10:16:28.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:28 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:28.839 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:16:28 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:28.840 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:16:28 compute-0 nova_compute[262220]: 2025-10-08 10:16:28.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:28.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:29 compute-0 ceph-mon[73572]: pgmap v956: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 2 op/s
Oct 08 10:16:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:30 compute-0 sudo[277410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:16:30 compute-0 sudo[277410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:30 compute-0 sudo[277410]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:30 compute-0 sudo[277435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:16:30 compute-0 sudo[277435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 31 op/s
Oct 08 10:16:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:30 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/570110382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:30 compute-0 sudo[277435]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:30 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:30.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:31 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.274 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-79d28498-fe9d-49dc-ad2c-bde432b239db" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.275 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-79d28498-fe9d-49dc-ad2c-bde432b239db" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.303 2 DEBUG nova.objects.instance [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'flavor' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:16:31 compute-0 ceph-mon[73572]: pgmap v957: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 31 op/s
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.561 2 DEBUG nova.virt.libvirt.vif [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.562 2 DEBUG nova.network.os_vif_util [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.562 2 DEBUG nova.network.os_vif_util [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.567 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.569 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.572 2 DEBUG nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Attempting to detach device tap79d28498-fe from instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.572 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] detach device xml: <interface type="ethernet">
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <mac address="fa:16:3e:40:4d:66"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <model type="virtio"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <driver name="vhost" rx_queue_size="512"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <mtu size="1442"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <target dev="tap79d28498-fe"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]: </interface>
Oct 08 10:16:31 compute-0 nova_compute[262220]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.582 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.585 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <name>instance-00000006</name>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <metadata>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:creationTime>2025-10-08 10:15:03</nova:creationTime>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:flavor name="m1.nano">
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:memory>128</nova:memory>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:disk>1</nova:disk>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:swap>0</nova:swap>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:vcpus>1</nova:vcpus>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </nova:flavor>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:owner>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </nova:owner>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:ports>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:port uuid="79d28498-fe9d-49dc-ad2c-bde432b239db">
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </nova:ports>
Oct 08 10:16:31 compute-0 nova_compute[262220]: </nova:instance>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </metadata>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <memory unit='KiB'>131072</memory>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <vcpu placement='static'>1</vcpu>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <resource>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <partition>/machine</partition>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </resource>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <sysinfo type='smbios'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <system>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='manufacturer'>RDO</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='product'>OpenStack Compute</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='serial'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='uuid'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='family'>Virtual Machine</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </system>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </sysinfo>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <os>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <boot dev='hd'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <smbios mode='sysinfo'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </os>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <features>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <acpi/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <apic/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <vmcoreinfo state='on'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </features>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <cpu mode='custom' match='exact' check='full'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <vendor>AMD</vendor>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='x2apic'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='tsc-deadline'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='hypervisor'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='tsc_adjust'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='spec-ctrl'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='stibp'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='arch-capabilities'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='ssbd'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='cmp_legacy'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='overflow-recov'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='succor'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='ibrs'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='amd-ssbd'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='virt-ssbd'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='lbrv'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='tsc-scale'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='vmcb-clean'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='flushbyasid'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='pause-filter'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='pfthreshold'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='rdctl-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='mds-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='pschange-mc-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='gds-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='rfds-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='xsaves'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='svm'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='topoext'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='npt'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='nrip-save'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <clock offset='utc'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <timer name='pit' tickpolicy='delay'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <timer name='hpet' present='no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </clock>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <on_poweroff>destroy</on_poweroff>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <on_reboot>restart</on_reboot>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <on_crash>destroy</on_crash>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <disk type='network' device='disk'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <driver name='qemu' type='raw' cache='none'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <auth username='openstack'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk' index='2'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.100' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.102' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.101' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </source>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target dev='vda' bus='virtio'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='virtio-disk0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <disk type='network' device='cdrom'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <driver name='qemu' type='raw' cache='none'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <auth username='openstack'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config' index='1'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.100' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.102' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.101' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </source>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target dev='sda' bus='sata'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <readonly/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='sata0-0-0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='0' model='pcie-root'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pcie.0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='1' port='0x10'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='2' port='0x11'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='3' port='0x12'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.3'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='4' port='0x13'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.4'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='5' port='0x14'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.5'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='6' port='0x15'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.6'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='7' port='0x16'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.7'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='8' port='0x17'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.8'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='9' port='0x18'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.9'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='10' port='0x19'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.10'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='11' port='0x1a'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.11'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='12' port='0x1b'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.12'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='13' port='0x1c'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.13'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='14' port='0x1d'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.14'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='15' port='0x1e'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.15'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='16' port='0x1f'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.16'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='17' port='0x20'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.17'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='18' port='0x21'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.18'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='19' port='0x22'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.19'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='20' port='0x23'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.20'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='21' port='0x24'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.21'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='22' port='0x25'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.22'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='23' port='0x26'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.23'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='24' port='0x27'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.24'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='25' port='0x28'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.25'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-pci-bridge'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.26'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='usb'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='sata' index='0'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='ide'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <interface type='ethernet'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <mac address='fa:16:3e:e6:b0:e0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target dev='tapbe4ec274-2a'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model type='virtio'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <driver name='vhost' rx_queue_size='512'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <mtu size='1442'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='net0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <interface type='ethernet'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <mac address='fa:16:3e:40:4d:66'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target dev='tap79d28498-fe'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model type='virtio'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <driver name='vhost' rx_queue_size='512'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <mtu size='1442'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='net1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <serial type='pty'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <source path='/dev/pts/0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target type='isa-serial' port='0'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <model name='isa-serial'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </target>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='serial0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </serial>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <console type='pty' tty='/dev/pts/0'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <source path='/dev/pts/0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target type='serial' port='0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='serial0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </console>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <input type='tablet' bus='usb'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='input0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='usb' bus='0' port='1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <input type='mouse' bus='ps2'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='input1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <input type='keyboard' bus='ps2'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='input2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <listen type='address' address='::0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </graphics>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <audio id='1' type='none'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <video>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model type='virtio' heads='1' primary='yes'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='video0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </video>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <watchdog model='itco' action='reset'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='watchdog0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </watchdog>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <memballoon model='virtio'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <stats period='10'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='balloon0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </memballoon>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <rng model='virtio'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <backend model='random'>/dev/urandom</backend>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='rng0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <label>system_u:system_r:svirt_t:s0:c144,c208</label>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c208</imagelabel>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </seclabel>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <label>+107:+107</label>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <imagelabel>+107:+107</imagelabel>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </seclabel>
Oct 08 10:16:31 compute-0 nova_compute[262220]: </domain>
Oct 08 10:16:31 compute-0 nova_compute[262220]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.588 2 INFO nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully detached device tap79d28498-fe from instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 from the persistent domain config.
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.588 2 DEBUG nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] (1/8): Attempting to detach device tap79d28498-fe with device alias net1 from instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.589 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] detach device xml: <interface type="ethernet">
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <mac address="fa:16:3e:40:4d:66"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <model type="virtio"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <driver name="vhost" rx_queue_size="512"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <mtu size="1442"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <target dev="tap79d28498-fe"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]: </interface>
Oct 08 10:16:31 compute-0 nova_compute[262220]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 08 10:16:31 compute-0 kernel: tap79d28498-fe (unregistering): left promiscuous mode
Oct 08 10:16:31 compute-0 NetworkManager[44872]: <info>  [1759918591.6477] device (tap79d28498-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 10:16:31 compute-0 ovn_controller[153187]: 2025-10-08T10:16:31Z|00050|binding|INFO|Releasing lport 79d28498-fe9d-49dc-ad2c-bde432b239db from this chassis (sb_readonly=0)
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:31 compute-0 ovn_controller[153187]: 2025-10-08T10:16:31Z|00051|binding|INFO|Setting lport 79d28498-fe9d-49dc-ad2c-bde432b239db down in Southbound
Oct 08 10:16:31 compute-0 ovn_controller[153187]: 2025-10-08T10:16:31Z|00052|binding|INFO|Removing iface tap79d28498-fe ovn-installed in OVS
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.668 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:4d:66 10.100.0.23', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a28a475-c59d-4526-93af-b8af40052e5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6ba97cc-1c15-47ba-aa89-c964fcf23523, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=79d28498-fe9d-49dc-ad2c-bde432b239db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.669 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 79d28498-fe9d-49dc-ad2c-bde432b239db in datapath 0a28a475-c59d-4526-93af-b8af40052e5c unbound from our chassis
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.670 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a28a475-c59d-4526-93af-b8af40052e5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.672 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fab4d5d7-cadc-4724-b9c2-7d7970a53a8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.672 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c namespace which is not needed anymore
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.675 2 DEBUG nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Received event <DeviceRemovedEvent: 1759918591.6748781, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.677 2 DEBUG nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Start waiting for the detach event from libvirt for device tap79d28498-fe with device alias net1 for instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.677 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 08 10:16:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:31.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.687 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <name>instance-00000006</name>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <metadata>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:creationTime>2025-10-08 10:15:03</nova:creationTime>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:flavor name="m1.nano">
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:memory>128</nova:memory>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:disk>1</nova:disk>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:swap>0</nova:swap>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:vcpus>1</nova:vcpus>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </nova:flavor>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:owner>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </nova:owner>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:ports>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:port uuid="79d28498-fe9d-49dc-ad2c-bde432b239db">
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </nova:ports>
Oct 08 10:16:31 compute-0 nova_compute[262220]: </nova:instance>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </metadata>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <memory unit='KiB'>131072</memory>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <vcpu placement='static'>1</vcpu>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <resource>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <partition>/machine</partition>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </resource>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <sysinfo type='smbios'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <system>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='manufacturer'>RDO</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='product'>OpenStack Compute</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='serial'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='uuid'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <entry name='family'>Virtual Machine</entry>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </system>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </sysinfo>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <os>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <boot dev='hd'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <smbios mode='sysinfo'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </os>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <features>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <acpi/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <apic/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <vmcoreinfo state='on'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </features>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <cpu mode='custom' match='exact' check='full'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <vendor>AMD</vendor>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='x2apic'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='tsc-deadline'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='hypervisor'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='tsc_adjust'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='spec-ctrl'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='stibp'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='arch-capabilities'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='ssbd'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='cmp_legacy'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='overflow-recov'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='succor'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='ibrs'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='amd-ssbd'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='virt-ssbd'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='lbrv'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='tsc-scale'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='vmcb-clean'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='flushbyasid'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='pause-filter'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='pfthreshold'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='rdctl-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='mds-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='pschange-mc-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='gds-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='rfds-no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='xsaves'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='svm'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='require' name='topoext'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='npt'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <feature policy='disable' name='nrip-save'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <clock offset='utc'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <timer name='pit' tickpolicy='delay'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <timer name='hpet' present='no'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </clock>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <on_poweroff>destroy</on_poweroff>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <on_reboot>restart</on_reboot>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <on_crash>destroy</on_crash>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <disk type='network' device='disk'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <driver name='qemu' type='raw' cache='none'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <auth username='openstack'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk' index='2'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.100' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.102' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.101' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </source>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target dev='vda' bus='virtio'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='virtio-disk0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <disk type='network' device='cdrom'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <driver name='qemu' type='raw' cache='none'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <auth username='openstack'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config' index='1'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.100' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.102' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <host name='192.168.122.101' port='6789'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </source>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target dev='sda' bus='sata'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <readonly/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='sata0-0-0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='0' model='pcie-root'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pcie.0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='1' port='0x10'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='2' port='0x11'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='3' port='0x12'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.3'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='4' port='0x13'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.4'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='5' port='0x14'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.5'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='6' port='0x15'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.6'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='7' port='0x16'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.7'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='8' port='0x17'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.8'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='9' port='0x18'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.9'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='10' port='0x19'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.10'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='11' port='0x1a'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.11'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='12' port='0x1b'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.12'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='13' port='0x1c'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.13'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='14' port='0x1d'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.14'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='15' port='0x1e'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.15'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='16' port='0x1f'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.16'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='17' port='0x20'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.17'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='18' port='0x21'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.18'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='19' port='0x22'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.19'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='20' port='0x23'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.20'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='21' port='0x24'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.21'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='22' port='0x25'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.22'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='23' port='0x26'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.23'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='24' port='0x27'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.24'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target chassis='25' port='0x28'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.25'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model name='pcie-pci-bridge'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='pci.26'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='usb'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <controller type='sata' index='0'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='ide'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <interface type='ethernet'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <mac address='fa:16:3e:e6:b0:e0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target dev='tapbe4ec274-2a'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model type='virtio'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <driver name='vhost' rx_queue_size='512'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <mtu size='1442'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='net0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <serial type='pty'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <source path='/dev/pts/0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target type='isa-serial' port='0'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:         <model name='isa-serial'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       </target>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='serial0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </serial>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <console type='pty' tty='/dev/pts/0'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <source path='/dev/pts/0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <target type='serial' port='0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='serial0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </console>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <input type='tablet' bus='usb'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='input0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='usb' bus='0' port='1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <input type='mouse' bus='ps2'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='input1'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <input type='keyboard' bus='ps2'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='input2'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <listen type='address' address='::0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </graphics>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <audio id='1' type='none'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <video>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <model type='virtio' heads='1' primary='yes'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='video0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </video>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <watchdog model='itco' action='reset'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='watchdog0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </watchdog>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <memballoon model='virtio'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <stats period='10'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='balloon0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </memballoon>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <rng model='virtio'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <backend model='random'>/dev/urandom</backend>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <alias name='rng0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <label>system_u:system_r:svirt_t:s0:c144,c208</label>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c208</imagelabel>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </seclabel>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <label>+107:+107</label>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <imagelabel>+107:+107</imagelabel>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </seclabel>
Oct 08 10:16:31 compute-0 nova_compute[262220]: </domain>
Oct 08 10:16:31 compute-0 nova_compute[262220]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.687 2 INFO nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully detached device tap79d28498-fe from instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 from the live domain config.
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.688 2 DEBUG nova.virt.libvirt.vif [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, 
"active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.688 2 DEBUG nova.network.os_vif_util [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.689 2 DEBUG nova.network.os_vif_util [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.689 2 DEBUG os_vif [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79d28498-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.699 2 INFO os_vif [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe')
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.700 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:creationTime>2025-10-08 10:16:31</nova:creationTime>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:flavor name="m1.nano">
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:memory>128</nova:memory>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:disk>1</nova:disk>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:swap>0</nova:swap>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:vcpus>1</nova:vcpus>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </nova:flavor>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:owner>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </nova:owner>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   <nova:ports>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct 08 10:16:31 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 08 10:16:31 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:16:31 compute-0 nova_compute[262220]:   </nova:ports>
Oct 08 10:16:31 compute-0 nova_compute[262220]: </nova:instance>
Oct 08 10:16:31 compute-0 nova_compute[262220]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 08 10:16:31 compute-0 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [NOTICE]   (274736) : haproxy version is 2.8.14-c23fe91
Oct 08 10:16:31 compute-0 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [NOTICE]   (274736) : path to executable is /usr/sbin/haproxy
Oct 08 10:16:31 compute-0 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [WARNING]  (274736) : Exiting Master process...
Oct 08 10:16:31 compute-0 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [WARNING]  (274736) : Exiting Master process...
Oct 08 10:16:31 compute-0 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [ALERT]    (274736) : Current worker (274738) exited with code 143 (Terminated)
Oct 08 10:16:31 compute-0 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [WARNING]  (274736) : All workers exited. Exiting... (0)
Oct 08 10:16:31 compute-0 systemd[1]: libpod-3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52.scope: Deactivated successfully.
Oct 08 10:16:31 compute-0 podman[277520]: 2025-10-08 10:16:31.812334229 +0000 UTC m=+0.044764123 container died 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-915dac930a5508f0d71bb51887deafacf6554c7ddc11a4e1d1f27258efcfd64d-merged.mount: Deactivated successfully.
Oct 08 10:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52-userdata-shm.mount: Deactivated successfully.
Oct 08 10:16:31 compute-0 podman[277520]: 2025-10-08 10:16:31.846278833 +0000 UTC m=+0.078708717 container cleanup 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 08 10:16:31 compute-0 systemd[1]: libpod-conmon-3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52.scope: Deactivated successfully.
Oct 08 10:16:31 compute-0 podman[277551]: 2025-10-08 10:16:31.909401876 +0000 UTC m=+0.038463340 container remove 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.915 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[caf0d0e0-1ba2-4ffe-adf0-6c6dce7bab52]: (4, ('Wed Oct  8 10:16:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c (3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52)\n3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52\nWed Oct  8 10:16:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c (3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52)\n3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.917 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[ddece913-440d-499e-8778-cfab1074f04f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.918 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a28a475-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:31 compute-0 kernel: tap0a28a475-c0: left promiscuous mode
Oct 08 10:16:31 compute-0 nova_compute[262220]: 2025-10-08 10:16:31.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.936 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c9ede8-8e32-415d-9966-73c0b8d03730]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.969 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[cb67bbcf-a900-4035-aab9-80820c7da0b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.970 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[07bb8549-ffd7-4734-bb9f-95351cd8bf23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.985 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8732dd-344d-41f5-8546-4dee305ec19e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446210, 'reachable_time': 19457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277568, 'error': None, 'target': 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.987 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 08 10:16:31 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.987 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[57b8dd5e-b1f5-40a0-aa27-129dded65275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a28a475\x2dc59d\x2d4526\x2d93af\x2db8af40052e5c.mount: Deactivated successfully.
Oct 08 10:16:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 30 op/s
Oct 08 10:16:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:32 compute-0 nova_compute[262220]: 2025-10-08 10:16:32.617 2 DEBUG nova.compute.manager [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-unplugged-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:16:32 compute-0 nova_compute[262220]: 2025-10-08 10:16:32.618 2 DEBUG oslo_concurrency.lockutils [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:32 compute-0 nova_compute[262220]: 2025-10-08 10:16:32.618 2 DEBUG oslo_concurrency.lockutils [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:32 compute-0 nova_compute[262220]: 2025-10-08 10:16:32.618 2 DEBUG oslo_concurrency.lockutils [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:32 compute-0 nova_compute[262220]: 2025-10-08 10:16:32.618 2 DEBUG nova.compute.manager [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-unplugged-79d28498-fe9d-49dc-ad2c-bde432b239db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:16:32 compute-0 nova_compute[262220]: 2025-10-08 10:16:32.618 2 WARNING nova.compute.manager [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-unplugged-79d28498-fe9d-49dc-ad2c-bde432b239db for instance with vm_state active and task_state None.
Oct 08 10:16:32 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:32.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:16:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:16:33 compute-0 nova_compute[262220]: 2025-10-08 10:16:33.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:16:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:16:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:16:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:16:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:16:33 compute-0 sudo[277569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:16:33 compute-0 sudo[277569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:33 compute-0 sudo[277569]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:33 compute-0 sudo[277595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:16:33 compute-0 sudo[277595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:33 compute-0 ceph-mon[73572]: pgmap v958: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 30 op/s
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:16:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:16:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:33.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:33 compute-0 podman[277661]: 2025-10-08 10:16:33.705580928 +0000 UTC m=+0.037775518 container create 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Oct 08 10:16:33 compute-0 systemd[1]: Started libpod-conmon-712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d.scope.
Oct 08 10:16:33 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:16:33 compute-0 podman[277661]: 2025-10-08 10:16:33.690499402 +0000 UTC m=+0.022694022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:16:33 compute-0 podman[277661]: 2025-10-08 10:16:33.799682149 +0000 UTC m=+0.131876789 container init 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:16:33 compute-0 podman[277661]: 2025-10-08 10:16:33.807116218 +0000 UTC m=+0.139310818 container start 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:16:33 compute-0 podman[277661]: 2025-10-08 10:16:33.811730528 +0000 UTC m=+0.143925138 container attach 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 10:16:33 compute-0 systemd[1]: libpod-712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d.scope: Deactivated successfully.
Oct 08 10:16:33 compute-0 jolly_ritchie[277678]: 167 167
Oct 08 10:16:33 compute-0 conmon[277678]: conmon 712f06e51e4b6857932d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d.scope/container/memory.events
Oct 08 10:16:33 compute-0 podman[277661]: 2025-10-08 10:16:33.81646867 +0000 UTC m=+0.148663270 container died 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct 08 10:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0f930eedc0f7d6ee79c4a904a12b27cbdab2155508350316bed0e98ba93c1e9-merged.mount: Deactivated successfully.
Oct 08 10:16:33 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:33.841 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:16:33 compute-0 podman[277661]: 2025-10-08 10:16:33.856665025 +0000 UTC m=+0.188859625 container remove 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 10:16:33 compute-0 systemd[1]: libpod-conmon-712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d.scope: Deactivated successfully.
Oct 08 10:16:33 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 08 10:16:34 compute-0 podman[277704]: 2025-10-08 10:16:34.032235551 +0000 UTC m=+0.045353502 container create b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 10:16:34 compute-0 podman[277699]: 2025-10-08 10:16:34.04897509 +0000 UTC m=+0.069283333 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 08 10:16:34 compute-0 systemd[1]: Started libpod-conmon-b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e.scope.
Oct 08 10:16:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:16:34 compute-0 podman[277704]: 2025-10-08 10:16:34.010532742 +0000 UTC m=+0.023650713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:34 compute-0 podman[277704]: 2025-10-08 10:16:34.121858448 +0000 UTC m=+0.134976409 container init b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:16:34 compute-0 podman[277704]: 2025-10-08 10:16:34.134089332 +0000 UTC m=+0.147207283 container start b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 08 10:16:34 compute-0 podman[277704]: 2025-10-08 10:16:34.137093139 +0000 UTC m=+0.150211100 container attach b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:16:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 30 op/s
Oct 08 10:16:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.434 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.434 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.435 2 DEBUG nova.network.neutron [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 08 10:16:34 compute-0 goofy_kowalevski[277740]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:16:34 compute-0 goofy_kowalevski[277740]: --> All data devices are unavailable
Oct 08 10:16:34 compute-0 systemd[1]: libpod-b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e.scope: Deactivated successfully.
Oct 08 10:16:34 compute-0 conmon[277740]: conmon b1fa6ca03c6872a01e3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e.scope/container/memory.events
Oct 08 10:16:34 compute-0 podman[277704]: 2025-10-08 10:16:34.484881832 +0000 UTC m=+0.497999783 container died b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 10:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584-merged.mount: Deactivated successfully.
Oct 08 10:16:34 compute-0 podman[277704]: 2025-10-08 10:16:34.527071631 +0000 UTC m=+0.540189582 container remove b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 08 10:16:34 compute-0 systemd[1]: libpod-conmon-b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e.scope: Deactivated successfully.
Oct 08 10:16:34 compute-0 sudo[277595]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:34 compute-0 sudo[277769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:16:34 compute-0 sudo[277769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:34 compute-0 sudo[277769]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:34 compute-0 sudo[277794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:16:34 compute-0 sudo[277794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.705 2 DEBUG nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.706 2 DEBUG oslo_concurrency.lockutils [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.706 2 DEBUG oslo_concurrency.lockutils [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.706 2 DEBUG oslo_concurrency.lockutils [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.707 2 DEBUG nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.707 2 WARNING nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db for instance with vm_state active and task_state None.
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.707 2 DEBUG nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-deleted-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.707 2 INFO nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Neutron deleted interface 79d28498-fe9d-49dc-ad2c-bde432b239db; detaching it from the instance and deleting it from the info cache
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.707 2 DEBUG nova.network.neutron [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.732 2 DEBUG nova.objects.instance [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lazy-loading 'system_metadata' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.760 2 DEBUG nova.objects.instance [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lazy-loading 'flavor' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:16:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.789 2 DEBUG nova.virt.libvirt.vif [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.789 2 DEBUG nova.network.os_vif_util [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.790 2 DEBUG nova.network.os_vif_util [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.793 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.796 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <name>instance-00000006</name>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <metadata>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:creationTime>2025-10-08 10:16:31</nova:creationTime>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:flavor name="m1.nano">
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:memory>128</nova:memory>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:disk>1</nova:disk>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:swap>0</nova:swap>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:vcpus>1</nova:vcpus>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </nova:flavor>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:owner>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </nova:owner>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:ports>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </nova:ports>
Oct 08 10:16:34 compute-0 nova_compute[262220]: </nova:instance>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </metadata>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <memory unit='KiB'>131072</memory>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <vcpu placement='static'>1</vcpu>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <resource>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <partition>/machine</partition>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </resource>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <sysinfo type='smbios'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <system>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='manufacturer'>RDO</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='product'>OpenStack Compute</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='serial'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='uuid'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='family'>Virtual Machine</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </system>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </sysinfo>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <os>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <boot dev='hd'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <smbios mode='sysinfo'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </os>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <features>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <acpi/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <apic/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <vmcoreinfo state='on'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </features>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <cpu mode='custom' match='exact' check='full'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <vendor>AMD</vendor>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='x2apic'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='tsc-deadline'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='hypervisor'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='tsc_adjust'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='spec-ctrl'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='stibp'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='arch-capabilities'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='ssbd'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='cmp_legacy'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='overflow-recov'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='succor'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='ibrs'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='amd-ssbd'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='virt-ssbd'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='lbrv'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='tsc-scale'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='vmcb-clean'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='flushbyasid'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='pause-filter'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='pfthreshold'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='rdctl-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='mds-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='pschange-mc-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='gds-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='rfds-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='xsaves'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='svm'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='topoext'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='npt'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='nrip-save'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <clock offset='utc'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <timer name='pit' tickpolicy='delay'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <timer name='hpet' present='no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </clock>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <on_poweroff>destroy</on_poweroff>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <on_reboot>restart</on_reboot>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <on_crash>destroy</on_crash>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <disk type='network' device='disk'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <driver name='qemu' type='raw' cache='none'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <auth username='openstack'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk' index='2'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.100' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.102' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.101' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </source>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target dev='vda' bus='virtio'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='virtio-disk0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <disk type='network' device='cdrom'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <driver name='qemu' type='raw' cache='none'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <auth username='openstack'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config' index='1'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.100' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.102' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.101' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </source>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target dev='sda' bus='sata'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <readonly/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='sata0-0-0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='0' model='pcie-root'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pcie.0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='1' port='0x10'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='2' port='0x11'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='3' port='0x12'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.3'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='4' port='0x13'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.4'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='5' port='0x14'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.5'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='6' port='0x15'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.6'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='7' port='0x16'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.7'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='8' port='0x17'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.8'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='9' port='0x18'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.9'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='10' port='0x19'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.10'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='11' port='0x1a'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.11'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='12' port='0x1b'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.12'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='13' port='0x1c'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.13'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='14' port='0x1d'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.14'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='15' port='0x1e'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.15'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='16' port='0x1f'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.16'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='17' port='0x20'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.17'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='18' port='0x21'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.18'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='19' port='0x22'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.19'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='20' port='0x23'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.20'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='21' port='0x24'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.21'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='22' port='0x25'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.22'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='23' port='0x26'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.23'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='24' port='0x27'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.24'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='25' port='0x28'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.25'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-pci-bridge'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.26'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='usb'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='sata' index='0'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='ide'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <interface type='ethernet'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <mac address='fa:16:3e:e6:b0:e0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target dev='tapbe4ec274-2a'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model type='virtio'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <driver name='vhost' rx_queue_size='512'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <mtu size='1442'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='net0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <serial type='pty'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <source path='/dev/pts/0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target type='isa-serial' port='0'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <model name='isa-serial'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </target>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='serial0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </serial>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <console type='pty' tty='/dev/pts/0'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <source path='/dev/pts/0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target type='serial' port='0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='serial0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </console>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <input type='tablet' bus='usb'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='input0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='usb' bus='0' port='1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <input type='mouse' bus='ps2'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='input1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <input type='keyboard' bus='ps2'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='input2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <listen type='address' address='::0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </graphics>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <audio id='1' type='none'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <video>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model type='virtio' heads='1' primary='yes'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='video0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </video>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <watchdog model='itco' action='reset'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='watchdog0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </watchdog>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <memballoon model='virtio'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <stats period='10'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='balloon0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </memballoon>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <rng model='virtio'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <backend model='random'>/dev/urandom</backend>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='rng0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <label>system_u:system_r:svirt_t:s0:c144,c208</label>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c208</imagelabel>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </seclabel>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <label>+107:+107</label>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <imagelabel>+107:+107</imagelabel>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </seclabel>
Oct 08 10:16:34 compute-0 nova_compute[262220]: </domain>
Oct 08 10:16:34 compute-0 nova_compute[262220]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.798 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.801 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <name>instance-00000006</name>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <metadata>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:creationTime>2025-10-08 10:16:31</nova:creationTime>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:flavor name="m1.nano">
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:memory>128</nova:memory>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:disk>1</nova:disk>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:swap>0</nova:swap>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:vcpus>1</nova:vcpus>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </nova:flavor>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:owner>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </nova:owner>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:ports>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </nova:ports>
Oct 08 10:16:34 compute-0 nova_compute[262220]: </nova:instance>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </metadata>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <memory unit='KiB'>131072</memory>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <vcpu placement='static'>1</vcpu>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <resource>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <partition>/machine</partition>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </resource>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <sysinfo type='smbios'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <system>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='manufacturer'>RDO</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='product'>OpenStack Compute</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='serial'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='uuid'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <entry name='family'>Virtual Machine</entry>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </system>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </sysinfo>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <os>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <boot dev='hd'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <smbios mode='sysinfo'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </os>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <features>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <acpi/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <apic/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <vmcoreinfo state='on'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </features>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <cpu mode='custom' match='exact' check='full'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <vendor>AMD</vendor>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='x2apic'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='tsc-deadline'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='hypervisor'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='tsc_adjust'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='spec-ctrl'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='stibp'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='arch-capabilities'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='ssbd'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='cmp_legacy'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='overflow-recov'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='succor'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='ibrs'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='amd-ssbd'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='virt-ssbd'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='lbrv'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='tsc-scale'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='vmcb-clean'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='flushbyasid'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='pause-filter'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='pfthreshold'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='rdctl-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='mds-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='pschange-mc-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='gds-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='rfds-no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='xsaves'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='svm'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='require' name='topoext'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='npt'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <feature policy='disable' name='nrip-save'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <clock offset='utc'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <timer name='pit' tickpolicy='delay'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <timer name='hpet' present='no'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </clock>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <on_poweroff>destroy</on_poweroff>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <on_reboot>restart</on_reboot>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <on_crash>destroy</on_crash>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <disk type='network' device='disk'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <driver name='qemu' type='raw' cache='none'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <auth username='openstack'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk' index='2'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.100' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.102' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.101' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </source>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target dev='vda' bus='virtio'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='virtio-disk0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <disk type='network' device='cdrom'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <driver name='qemu' type='raw' cache='none'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <auth username='openstack'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config' index='1'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.100' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.102' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <host name='192.168.122.101' port='6789'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </source>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target dev='sda' bus='sata'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <readonly/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='sata0-0-0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='0' model='pcie-root'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pcie.0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='1' port='0x10'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='2' port='0x11'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='3' port='0x12'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.3'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='4' port='0x13'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.4'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='5' port='0x14'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.5'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='6' port='0x15'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.6'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='7' port='0x16'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.7'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='8' port='0x17'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.8'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='9' port='0x18'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.9'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='10' port='0x19'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.10'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='11' port='0x1a'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.11'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='12' port='0x1b'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.12'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='13' port='0x1c'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.13'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='14' port='0x1d'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.14'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='15' port='0x1e'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.15'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='16' port='0x1f'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.16'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='17' port='0x20'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.17'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='18' port='0x21'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.18'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='19' port='0x22'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.19'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='20' port='0x23'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.20'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='21' port='0x24'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.21'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='22' port='0x25'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.22'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='23' port='0x26'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.23'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='24' port='0x27'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.24'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-root-port'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target chassis='25' port='0x28'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.25'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model name='pcie-pci-bridge'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='pci.26'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='usb'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <controller type='sata' index='0'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='ide'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </controller>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <interface type='ethernet'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <mac address='fa:16:3e:e6:b0:e0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target dev='tapbe4ec274-2a'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model type='virtio'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <driver name='vhost' rx_queue_size='512'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <mtu size='1442'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='net0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <serial type='pty'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <source path='/dev/pts/0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target type='isa-serial' port='0'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:         <model name='isa-serial'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       </target>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='serial0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </serial>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <console type='pty' tty='/dev/pts/0'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <source path='/dev/pts/0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <target type='serial' port='0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='serial0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </console>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <input type='tablet' bus='usb'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='input0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='usb' bus='0' port='1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <input type='mouse' bus='ps2'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='input1'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <input type='keyboard' bus='ps2'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='input2'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </input>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <listen type='address' address='::0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </graphics>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <audio id='1' type='none'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <video>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <model type='virtio' heads='1' primary='yes'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='video0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </video>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <watchdog model='itco' action='reset'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='watchdog0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </watchdog>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <memballoon model='virtio'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <stats period='10'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='balloon0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </memballoon>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <rng model='virtio'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <backend model='random'>/dev/urandom</backend>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <alias name='rng0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <label>system_u:system_r:svirt_t:s0:c144,c208</label>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c208</imagelabel>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </seclabel>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <label>+107:+107</label>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <imagelabel>+107:+107</imagelabel>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </seclabel>
Oct 08 10:16:34 compute-0 nova_compute[262220]: </domain>
Oct 08 10:16:34 compute-0 nova_compute[262220]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.803 2 WARNING nova.virt.libvirt.driver [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Detaching interface fa:16:3e:40:4d:66 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap79d28498-fe' not found.
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.804 2 DEBUG nova.virt.libvirt.vif [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.805 2 DEBUG nova.network.os_vif_util [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.806 2 DEBUG nova.network.os_vif_util [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.806 2 DEBUG os_vif [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.807 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79d28498-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.810 2 INFO os_vif [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe')
Oct 08 10:16:34 compute-0 nova_compute[262220]: 2025-10-08 10:16:34.810 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:creationTime>2025-10-08 10:16:34</nova:creationTime>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:flavor name="m1.nano">
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:memory>128</nova:memory>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:disk>1</nova:disk>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:swap>0</nova:swap>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:vcpus>1</nova:vcpus>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </nova:flavor>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:owner>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </nova:owner>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   <nova:ports>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct 08 10:16:34 compute-0 nova_compute[262220]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 08 10:16:34 compute-0 nova_compute[262220]:     </nova:port>
Oct 08 10:16:34 compute-0 nova_compute[262220]:   </nova:ports>
Oct 08 10:16:34 compute-0 nova_compute[262220]: </nova:instance>
Oct 08 10:16:34 compute-0 nova_compute[262220]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 08 10:16:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:34.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:35 compute-0 podman[277859]: 2025-10-08 10:16:35.080348495 +0000 UTC m=+0.059102975 container create a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:16:35 compute-0 systemd[1]: Started libpod-conmon-a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436.scope.
Oct 08 10:16:35 compute-0 podman[277859]: 2025-10-08 10:16:35.041628878 +0000 UTC m=+0.020383388 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:16:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:16:35 compute-0 podman[277859]: 2025-10-08 10:16:35.159217896 +0000 UTC m=+0.137972396 container init a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:16:35 compute-0 podman[277859]: 2025-10-08 10:16:35.166062556 +0000 UTC m=+0.144817036 container start a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:16:35 compute-0 podman[277859]: 2025-10-08 10:16:35.16928001 +0000 UTC m=+0.148034490 container attach a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:16:35 compute-0 distracted_jemison[277875]: 167 167
Oct 08 10:16:35 compute-0 systemd[1]: libpod-a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436.scope: Deactivated successfully.
Oct 08 10:16:35 compute-0 podman[277859]: 2025-10-08 10:16:35.1714586 +0000 UTC m=+0.150213080 container died a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e06d261bdd26ea7582cbc0dd05b494af78f2c33044d74c1b52c3545e9bf621e3-merged.mount: Deactivated successfully.
Oct 08 10:16:35 compute-0 podman[277859]: 2025-10-08 10:16:35.206756037 +0000 UTC m=+0.185510507 container remove a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:16:35 compute-0 systemd[1]: libpod-conmon-a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436.scope: Deactivated successfully.
Oct 08 10:16:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:35 compute-0 podman[277900]: 2025-10-08 10:16:35.400450576 +0000 UTC m=+0.071893656 container create 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 08 10:16:35 compute-0 podman[277900]: 2025-10-08 10:16:35.353934998 +0000 UTC m=+0.025378088 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:16:35 compute-0 systemd[1]: Started libpod-conmon-326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3.scope.
Oct 08 10:16:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:16:35 compute-0 ceph-mon[73572]: pgmap v959: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 30 op/s
Oct 08 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:35 compute-0 podman[277900]: 2025-10-08 10:16:35.517833248 +0000 UTC m=+0.189276358 container init 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:16:35 compute-0 podman[277900]: 2025-10-08 10:16:35.525888168 +0000 UTC m=+0.197331248 container start 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:16:35 compute-0 podman[277900]: 2025-10-08 10:16:35.529062519 +0000 UTC m=+0.200505629 container attach 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 08 10:16:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:35.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:35] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Oct 08 10:16:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:35] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Oct 08 10:16:35 compute-0 ovn_controller[153187]: 2025-10-08T10:16:35Z|00053|binding|INFO|Releasing lport f613d263-6ad2-4e23-84bc-b066c6b6b34a from this chassis (sb_readonly=0)
Oct 08 10:16:35 compute-0 nova_compute[262220]: 2025-10-08 10:16:35.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]: {
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:     "1": [
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:         {
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "devices": [
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "/dev/loop3"
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             ],
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "lv_name": "ceph_lv0",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "lv_size": "21470642176",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "name": "ceph_lv0",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "tags": {
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.cluster_name": "ceph",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.crush_device_class": "",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.encrypted": "0",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.osd_id": "1",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.type": "block",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.vdo": "0",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:                 "ceph.with_tpm": "0"
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             },
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "type": "block",
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:             "vg_name": "ceph_vg0"
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:         }
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]:     ]
Oct 08 10:16:35 compute-0 dreamy_lehmann[277916]: }
Oct 08 10:16:35 compute-0 systemd[1]: libpod-326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3.scope: Deactivated successfully.
Oct 08 10:16:35 compute-0 podman[277900]: 2025-10-08 10:16:35.882408052 +0000 UTC m=+0.553851132 container died 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c-merged.mount: Deactivated successfully.
Oct 08 10:16:35 compute-0 podman[277900]: 2025-10-08 10:16:35.930169441 +0000 UTC m=+0.601612521 container remove 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:16:35 compute-0 systemd[1]: libpod-conmon-326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3.scope: Deactivated successfully.
Oct 08 10:16:35 compute-0 sudo[277794]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:36 compute-0 sudo[277940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.033 2 INFO nova.network.neutron [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Port 79d28498-fe9d-49dc-ad2c-bde432b239db from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.034 2 DEBUG nova.network.neutron [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:16:36 compute-0 sudo[277940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:36 compute-0 sudo[277940]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.070 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:16:36 compute-0 sudo[277965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:16:36 compute-0 sudo[277965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.095 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-79d28498-fe9d-49dc-ad2c-bde432b239db" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.9 KiB/s wr, 28 op/s
Oct 08 10:16:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:36 compute-0 podman[278032]: 2025-10-08 10:16:36.530749819 +0000 UTC m=+0.048690460 container create 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 08 10:16:36 compute-0 systemd[1]: Started libpod-conmon-34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9.scope.
Oct 08 10:16:36 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:16:36 compute-0 podman[278032]: 2025-10-08 10:16:36.508294995 +0000 UTC m=+0.026235656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:16:36 compute-0 podman[278032]: 2025-10-08 10:16:36.613411461 +0000 UTC m=+0.131352132 container init 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 10:16:36 compute-0 podman[278032]: 2025-10-08 10:16:36.621117229 +0000 UTC m=+0.139057870 container start 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 10:16:36 compute-0 podman[278032]: 2025-10-08 10:16:36.624313032 +0000 UTC m=+0.142253693 container attach 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:16:36 compute-0 peaceful_lewin[278049]: 167 167
Oct 08 10:16:36 compute-0 systemd[1]: libpod-34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9.scope: Deactivated successfully.
Oct 08 10:16:36 compute-0 podman[278032]: 2025-10-08 10:16:36.626449031 +0000 UTC m=+0.144389672 container died 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-15f62404d77d12f04baf89dac6d54f39b48bfa0e8a75b2aabf18b33058699fb5-merged.mount: Deactivated successfully.
Oct 08 10:16:36 compute-0 podman[278032]: 2025-10-08 10:16:36.668078503 +0000 UTC m=+0.186019144 container remove 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 08 10:16:36 compute-0 systemd[1]: libpod-conmon-34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9.scope: Deactivated successfully.
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG nova.compute.manager [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG nova.compute.manager [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG oslo_concurrency.lockutils [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG oslo_concurrency.lockutils [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:16:36 compute-0 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG nova.network.neutron [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:16:36 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:36 compute-0 podman[278073]: 2025-10-08 10:16:36.836801107 +0000 UTC m=+0.041472817 container create a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:16:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:36.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:36 compute-0 systemd[1]: Started libpod-conmon-a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a.scope.
Oct 08 10:16:36 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:36 compute-0 podman[278073]: 2025-10-08 10:16:36.821333439 +0000 UTC m=+0.026005179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.015 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.015 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.015 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.015 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.016 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.017 2 INFO nova.compute.manager [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Terminating instance
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.017 2 DEBUG nova.compute.manager [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 08 10:16:37 compute-0 podman[278073]: 2025-10-08 10:16:37.02093101 +0000 UTC m=+0.225602780 container init a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Oct 08 10:16:37 compute-0 podman[278073]: 2025-10-08 10:16:37.02965225 +0000 UTC m=+0.234323960 container start a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 10:16:37 compute-0 podman[278073]: 2025-10-08 10:16:37.03275379 +0000 UTC m=+0.237425600 container attach a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 08 10:16:37 compute-0 kernel: tapbe4ec274-2a (unregistering): left promiscuous mode
Oct 08 10:16:37 compute-0 NetworkManager[44872]: <info>  [1759918597.0751] device (tapbe4ec274-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 10:16:37 compute-0 ovn_controller[153187]: 2025-10-08T10:16:37Z|00054|binding|INFO|Releasing lport be4ec274-2a90-48e8-bd51-fd01f3c659da from this chassis (sb_readonly=0)
Oct 08 10:16:37 compute-0 ovn_controller[153187]: 2025-10-08T10:16:37Z|00055|binding|INFO|Setting lport be4ec274-2a90-48e8-bd51-fd01f3c659da down in Southbound
Oct 08 10:16:37 compute-0 ovn_controller[153187]: 2025-10-08T10:16:37Z|00056|binding|INFO|Removing iface tapbe4ec274-2a ovn-installed in OVS
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.142 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:b0:e0 10.100.0.3'], port_security=['fa:16:3e:e6:b0:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-834a886f-bb33-49ed-b47e-ef0308a38e89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13817d67-6af8-4060-9f0c-16a7fd8532c0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eaf1a8f-1880-48d7-9974-4c1e9169efe5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=be4ec274-2a90-48e8-bd51-fd01f3c659da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.143 163175 INFO neutron.agent.ovn.metadata.agent [-] Port be4ec274-2a90-48e8-bd51-fd01f3c659da in datapath 834a886f-bb33-49ed-b47e-ef0308a38e89 unbound from our chassis
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.144 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 834a886f-bb33-49ed-b47e-ef0308a38e89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.145 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2ce43d-b577-40d2-a800-61ed62442c85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.146 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 namespace which is not needed anymore
Oct 08 10:16:37 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 08 10:16:37 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Consumed 19.101s CPU time.
Oct 08 10:16:37 compute-0 systemd-machined[216030]: Machine qemu-2-instance-00000006 terminated.
Oct 08 10:16:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:37.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.264 2 INFO nova.virt.libvirt.driver [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance destroyed successfully.
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.266 2 DEBUG nova.objects.instance [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'resources' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:16:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:37 compute-0 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [NOTICE]   (274447) : haproxy version is 2.8.14-c23fe91
Oct 08 10:16:37 compute-0 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [NOTICE]   (274447) : path to executable is /usr/sbin/haproxy
Oct 08 10:16:37 compute-0 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [WARNING]  (274447) : Exiting Master process...
Oct 08 10:16:37 compute-0 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [ALERT]    (274447) : Current worker (274449) exited with code 143 (Terminated)
Oct 08 10:16:37 compute-0 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [WARNING]  (274447) : All workers exited. Exiting... (0)
Oct 08 10:16:37 compute-0 systemd[1]: libpod-2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9.scope: Deactivated successfully.
Oct 08 10:16:37 compute-0 podman[278126]: 2025-10-08 10:16:37.290448881 +0000 UTC m=+0.052516692 container died 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 08 10:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9-userdata-shm.mount: Deactivated successfully.
Oct 08 10:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-841b76c2441b0eb7f658de0d9799efa6ab00baf820e9b70f7311256c5c904ae8-merged.mount: Deactivated successfully.
Oct 08 10:16:37 compute-0 podman[278126]: 2025-10-08 10:16:37.328818898 +0000 UTC m=+0.090886699 container cleanup 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.340 2 DEBUG nova.virt.libvirt.vif [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 08 10:16:37 compute-0 systemd[1]: libpod-conmon-2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9.scope: Deactivated successfully.
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.342 2 DEBUG nova.network.os_vif_util [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.343 2 DEBUG nova.network.os_vif_util [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.343 2 DEBUG os_vif [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe4ec274-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.351 2 INFO os_vif [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a')
Oct 08 10:16:37 compute-0 podman[278177]: 2025-10-08 10:16:37.402073157 +0000 UTC m=+0.049893608 container remove 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.410 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[74326e86-6f5d-4c6d-92c2-fa9a3bae5279]: (4, ('Wed Oct  8 10:16:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 (2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9)\n2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9\nWed Oct  8 10:16:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 (2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9)\n2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.412 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[ca343fa8-cd8f-4685-8363-3fea3b738625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.413 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap834a886f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:37 compute-0 kernel: tap834a886f-b0: left promiscuous mode
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.425 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[9c12be10-57de-4b57-83eb-193d3f36ec0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.431 2 DEBUG nova.compute.manager [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-unplugged-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.433 2 DEBUG oslo_concurrency.lockutils [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.434 2 DEBUG oslo_concurrency.lockutils [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.434 2 DEBUG oslo_concurrency.lockutils [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.435 2 DEBUG nova.compute.manager [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-unplugged-be4ec274-2a90-48e8-bd51-fd01f3c659da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.436 2 DEBUG nova.compute.manager [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-unplugged-be4ec274-2a90-48e8-bd51-fd01f3c659da for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.462 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f48bd9-a239-4d2c-b4ba-ce91ab509b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.463 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b00854-daf8-4c26-9bed-13fcb4cd486d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.480 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecbb59a-24a4-4ba6-bae6-f675a577afb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443280, 'reachable_time': 17227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278234, 'error': None, 'target': 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.483 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 08 10:16:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.483 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[e65d050a-14b6-4cdb-baea-23076ec0532b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:16:37 compute-0 ceph-mon[73572]: pgmap v960: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.9 KiB/s wr, 28 op/s
Oct 08 10:16:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d834a886f\x2dbb33\x2d49ed\x2db47e\x2def0308a38e89.mount: Deactivated successfully.
Oct 08 10:16:37 compute-0 lvm[278260]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:16:37 compute-0 lvm[278260]: VG ceph_vg0 finished
Oct 08 10:16:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:37.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:37 compute-0 keen_mclaren[278090]: {}
Oct 08 10:16:37 compute-0 systemd[1]: libpod-a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a.scope: Deactivated successfully.
Oct 08 10:16:37 compute-0 systemd[1]: libpod-a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a.scope: Consumed 1.206s CPU time.
Oct 08 10:16:37 compute-0 podman[278073]: 2025-10-08 10:16:37.745574373 +0000 UTC m=+0.950246093 container died a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 08 10:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92-merged.mount: Deactivated successfully.
Oct 08 10:16:37 compute-0 podman[278073]: 2025-10-08 10:16:37.797199016 +0000 UTC m=+1.001870746 container remove a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 10:16:37 compute-0 systemd[1]: libpod-conmon-a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a.scope: Deactivated successfully.
Oct 08 10:16:37 compute-0 sudo[277965]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:16:37 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.862 2 INFO nova.virt.libvirt.driver [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Deleting instance files /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40_del
Oct 08 10:16:37 compute-0 nova_compute[262220]: 2025-10-08 10:16:37.864 2 INFO nova.virt.libvirt.driver [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Deletion of /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40_del complete
Oct 08 10:16:37 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:16:37 compute-0 sudo[278274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:16:37 compute-0 sudo[278274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:37 compute-0 sudo[278274]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.103 2 INFO nova.compute.manager [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Took 1.08 seconds to destroy the instance on the hypervisor.
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.103 2 DEBUG oslo.service.loopingcall [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.104 2 DEBUG nova.compute.manager [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.104 2 DEBUG nova.network.neutron [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 08 10:16:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.9 KiB/s wr, 28 op/s
Oct 08 10:16:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.466 2 DEBUG nova.network.neutron [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.466 2 DEBUG nova.network.neutron [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.610 2 DEBUG oslo_concurrency.lockutils [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:16:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:16:38 compute-0 ceph-mon[73572]: pgmap v961: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.9 KiB/s wr, 28 op/s
Oct 08 10:16:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:38.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.969 2 DEBUG nova.network.neutron [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.972 2 DEBUG nova.compute.manager [req-7167e09e-8b78-4752-8e62-dc358ce87d6b req-0b6eccab-71fe-405f-b78a-a804c8c8e9d8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-deleted-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.973 2 INFO nova.compute.manager [req-7167e09e-8b78-4752-8e62-dc358ce87d6b req-0b6eccab-71fe-405f-b78a-a804c8c8e9d8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Neutron deleted interface be4ec274-2a90-48e8-bd51-fd01f3c659da; detaching it from the instance and deleting it from the info cache
Oct 08 10:16:38 compute-0 nova_compute[262220]: 2025-10-08 10:16:38.973 2 DEBUG nova.network.neutron [req-7167e09e-8b78-4752-8e62-dc358ce87d6b req-0b6eccab-71fe-405f-b78a-a804c8c8e9d8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.129 2 INFO nova.compute.manager [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Took 1.02 seconds to deallocate network for instance.
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.134 2 DEBUG nova.compute.manager [req-7167e09e-8b78-4752-8e62-dc358ce87d6b req-0b6eccab-71fe-405f-b78a-a804c8c8e9d8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Detach interface failed, port_id=be4ec274-2a90-48e8-bd51-fd01f3c659da, reason: Instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 08 10:16:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.278 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.278 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.325 2 DEBUG oslo_concurrency.processutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.561 2 DEBUG nova.compute.manager [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.562 2 DEBUG oslo_concurrency.lockutils [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.562 2 DEBUG oslo_concurrency.lockutils [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.562 2 DEBUG oslo_concurrency.lockutils [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.563 2 DEBUG nova.compute.manager [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.563 2 WARNING nova.compute.manager [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da for instance with vm_state deleted and task_state None.
Oct 08 10:16:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:39.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:16:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2575423298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.768 2 DEBUG oslo_concurrency.processutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.774 2 DEBUG nova.compute.provider_tree [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.802 2 DEBUG nova.scheduler.client.report [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.827 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:39 compute-0 nova_compute[262220]: 2025-10-08 10:16:39.856 2 INFO nova.scheduler.client.report [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Deleted allocations for instance ea469a2e-bf09-495c-9b5e-02ad38d32d40
Oct 08 10:16:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2575423298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:40 compute-0 nova_compute[262220]: 2025-10-08 10:16:40.064 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.1 KiB/s wr, 57 op/s
Oct 08 10:16:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:40.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:40 compute-0 ceph-mon[73572]: pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.1 KiB/s wr, 57 op/s
Oct 08 10:16:41 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:41.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 08 10:16:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:42 compute-0 nova_compute[262220]: 2025-10-08 10:16:42.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:42.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:43 compute-0 nova_compute[262220]: 2025-10-08 10:16:43.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:43 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:43 compute-0 ceph-mon[73572]: pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 08 10:16:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:43.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 08 10:16:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:44.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:44 compute-0 nova_compute[262220]: 2025-10-08 10:16:44.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:44 compute-0 nova_compute[262220]: 2025-10-08 10:16:44.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:45 compute-0 ceph-mon[73572]: pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 08 10:16:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:45.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:45] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Oct 08 10:16:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:45] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.881 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.946 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.946 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.947 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:45 compute-0 podman[278331]: 2025-10-08 10:16:45.956315664 +0000 UTC m=+0.110409528 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:16:45 compute-0 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:16:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:16:46 compute-0 kernel: ganesha.nfsd[277237]: segfault at 50 ip 00007fad51cb432e sp 00007fad1bffe210 error 4 in libntirpc.so.5.8[7fad51c99000+2c000] likely on CPU 5 (core 0, socket 5)
Oct 08 10:16:46 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 08 10:16:46 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy ignored for local
Oct 08 10:16:46 compute-0 systemd[1]: Started Process Core Dump (PID 278379/UID 0).
Oct 08 10:16:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:16:46 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115030173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:46 compute-0 nova_compute[262220]: 2025-10-08 10:16:46.440 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:16:46 compute-0 nova_compute[262220]: 2025-10-08 10:16:46.604 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:16:46 compute-0 nova_compute[262220]: 2025-10-08 10:16:46.613 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4529MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:16:46 compute-0 nova_compute[262220]: 2025-10-08 10:16:46.614 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:46 compute-0 nova_compute[262220]: 2025-10-08 10:16:46.614 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:46 compute-0 nova_compute[262220]: 2025-10-08 10:16:46.848 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:16:46 compute-0 nova_compute[262220]: 2025-10-08 10:16:46.848 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:16:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:46 compute-0 nova_compute[262220]: 2025-10-08 10:16:46.873 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:16:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:47.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:16:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:16:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774158923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:47 compute-0 nova_compute[262220]: 2025-10-08 10:16:47.302 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:16:47 compute-0 nova_compute[262220]: 2025-10-08 10:16:47.308 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:16:47 compute-0 nova_compute[262220]: 2025-10-08 10:16:47.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:47 compute-0 nova_compute[262220]: 2025-10-08 10:16:47.404 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:16:47 compute-0 ceph-mon[73572]: pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:16:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2115030173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1774158923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:47 compute-0 nova_compute[262220]: 2025-10-08 10:16:47.550 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:16:47 compute-0 nova_compute[262220]: 2025-10-08 10:16:47.551 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:47 compute-0 systemd-coredump[278380]: Process 265041 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 90:
                                                    #0  0x00007fad51cb432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 08 10:16:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:47.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:16:47
Oct 08 10:16:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:16:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:16:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['.nfs', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 08 10:16:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:16:47 compute-0 systemd[1]: systemd-coredump@10-278379-0.service: Deactivated successfully.
Oct 08 10:16:47 compute-0 systemd[1]: systemd-coredump@10-278379-0.service: Consumed 1.073s CPU time.
Oct 08 10:16:47 compute-0 podman[278411]: 2025-10-08 10:16:47.794424047 +0000 UTC m=+0.041476207 container died ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:16:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e-merged.mount: Deactivated successfully.
Oct 08 10:16:47 compute-0 podman[278411]: 2025-10-08 10:16:47.845912046 +0000 UTC m=+0.092964186 container remove ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:16:47 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct 08 10:16:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:16:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:16:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:16:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:16:48 compute-0 nova_compute[262220]: 2025-10-08 10:16:48.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:48 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct 08 10:16:48 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 2.252s CPU time.
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:16:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:16:48 compute-0 sudo[278455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:16:48 compute-0 sudo[278455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:16:48 compute-0 sudo[278455]: pam_unix(sudo:session): session closed for user root
Oct 08 10:16:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:16:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:48.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.202163) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609202249, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1465, "num_deletes": 257, "total_data_size": 2777328, "memory_usage": 2828192, "flush_reason": "Manual Compaction"}
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609217431, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2696170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26909, "largest_seqno": 28373, "table_properties": {"data_size": 2689326, "index_size": 3915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14098, "raw_average_key_size": 19, "raw_value_size": 2675668, "raw_average_value_size": 3695, "num_data_blocks": 172, "num_entries": 724, "num_filter_entries": 724, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918475, "oldest_key_time": 1759918475, "file_creation_time": 1759918609, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 15296 microseconds, and 6492 cpu microseconds.
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.217471) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2696170 bytes OK
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.217490) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.218682) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.218700) EVENT_LOG_v1 {"time_micros": 1759918609218694, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.218718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2771028, prev total WAL file size 2771028, number of live WAL files 2.
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.219711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353035' seq:72057594037927935, type:22 .. '6C6F676D00373538' seq:0, type:0; will stop at (end)
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2632KB)], [59(13MB)]
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609219814, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17013496, "oldest_snapshot_seqno": -1}
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6008 keys, 16864503 bytes, temperature: kUnknown
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609327166, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 16864503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16821148, "index_size": 27245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 153034, "raw_average_key_size": 25, "raw_value_size": 16709759, "raw_average_value_size": 2781, "num_data_blocks": 1115, "num_entries": 6008, "num_filter_entries": 6008, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918609, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.327556) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16864503 bytes
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.329329) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.4 rd, 157.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 13.7 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(12.6) write-amplify(6.3) OK, records in: 6540, records dropped: 532 output_compression: NoCompression
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.329406) EVENT_LOG_v1 {"time_micros": 1759918609329378, "job": 32, "event": "compaction_finished", "compaction_time_micros": 107431, "compaction_time_cpu_micros": 35877, "output_level": 6, "num_output_files": 1, "total_output_size": 16864503, "num_input_records": 6540, "num_output_records": 6008, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609331083, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609337582, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.219567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:16:49 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:16:49 compute-0 ceph-mon[73572]: pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:16:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3029546326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:49 compute-0 nova_compute[262220]: 2025-10-08 10:16:49.491 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:49 compute-0 nova_compute[262220]: 2025-10-08 10:16:49.492 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:16:49 compute-0 nova_compute[262220]: 2025-10-08 10:16:49.493 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:16:49 compute-0 nova_compute[262220]: 2025-10-08 10:16:49.550 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:16:49 compute-0 nova_compute[262220]: 2025-10-08 10:16:49.551 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:49 compute-0 nova_compute[262220]: 2025-10-08 10:16:49.551 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:49 compute-0 nova_compute[262220]: 2025-10-08 10:16:49.552 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:16:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:16:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:49.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:16:49 compute-0 nova_compute[262220]: 2025-10-08 10:16:49.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:16:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:16:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3622741335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/10431101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:50.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:51 compute-0 ceph-mon[73572]: pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:16:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4265113444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:16:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:51.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:52 compute-0 nova_compute[262220]: 2025-10-08 10:16:52.261 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759918597.2590716, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:16:52 compute-0 nova_compute[262220]: 2025-10-08 10:16:52.262 2 INFO nova.compute.manager [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] VM Stopped (Lifecycle Event)
Oct 08 10:16:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:16:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101652 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 08 10:16:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [NOTICE] 280/101652 (4) : haproxy version is 2.3.17-d1c9119
Oct 08 10:16:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [NOTICE] 280/101652 (4) : path to executable is /usr/local/sbin/haproxy
Oct 08 10:16:52 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [ALERT] 280/101652 (4) : backend 'backend' has no server available!
Oct 08 10:16:52 compute-0 nova_compute[262220]: 2025-10-08 10:16:52.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:52 compute-0 nova_compute[262220]: 2025-10-08 10:16:52.600 2 DEBUG nova.compute.manager [None req-b33200b9-d89a-4310-9cb7-5ce4eec60b55 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:16:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:52.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:52 compute-0 podman[278484]: 2025-10-08 10:16:52.914949928 +0000 UTC m=+0.065495240 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 08 10:16:52 compute-0 podman[278485]: 2025-10-08 10:16:52.93359782 +0000 UTC m=+0.082354765 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:16:53 compute-0 nova_compute[262220]: 2025-10-08 10:16:53.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:53 compute-0 ceph-mon[73572]: pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:16:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:53.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct 08 10:16:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:54.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:55 compute-0 ceph-mon[73572]: pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct 08 10:16:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:55.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:55] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 08 10:16:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:55] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 08 10:16:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:16:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:56.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:57.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:16:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:57.416 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:16:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:57.416 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:16:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:16:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:16:57 compute-0 nova_compute[262220]: 2025-10-08 10:16:57.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:57 compute-0 ceph-mon[73572]: pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:16:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:57.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:16:58 compute-0 nova_compute[262220]: 2025-10-08 10:16:58.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:16:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:16:58 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 11.
Oct 08 10:16:58 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:16:58 compute-0 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 2.252s CPU time.
Oct 08 10:16:58 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct 08 10:16:58 compute-0 podman[278576]: 2025-10-08 10:16:58.570844719 +0000 UTC m=+0.051484660 container create 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:16:58 compute-0 ceph-mon[73572]: pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 08 10:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f71d56eda20e561c36aebfa14cd6c6b082450f31285702bdb269ddf373f4272/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f71d56eda20e561c36aebfa14cd6c6b082450f31285702bdb269ddf373f4272/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f71d56eda20e561c36aebfa14cd6c6b082450f31285702bdb269ddf373f4272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f71d56eda20e561c36aebfa14cd6c6b082450f31285702bdb269ddf373f4272/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:16:58 compute-0 podman[278576]: 2025-10-08 10:16:58.54480998 +0000 UTC m=+0.025449951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:16:58 compute-0 podman[278576]: 2025-10-08 10:16:58.656180768 +0000 UTC m=+0.136820729 container init 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:16:58 compute-0 podman[278576]: 2025-10-08 10:16:58.661536061 +0000 UTC m=+0.142176002 container start 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 10:16:58 compute-0 bash[278576]: 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b
Oct 08 10:16:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 08 10:16:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 08 10:16:58 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct 08 10:16:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 08 10:16:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 08 10:16:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 08 10:16:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 08 10:16:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 08 10:16:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:16:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:16:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:58.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:16:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:16:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:16:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:16:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:59.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct 08 10:17:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:00.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:01 compute-0 ceph-mon[73572]: pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct 08 10:17:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:01.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct 08 10:17:02 compute-0 nova_compute[262220]: 2025-10-08 10:17:02.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:17:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:17:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:02.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:03 compute-0 nova_compute[262220]: 2025-10-08 10:17:03.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:03 compute-0 ceph-mon[73572]: pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct 08 10:17:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:17:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:03.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 170 B/s wr, 1 op/s
Oct 08 10:17:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:17:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:04.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:17:04 compute-0 podman[278640]: 2025-10-08 10:17:04.902192777 +0000 UTC m=+0.060847341 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 08 10:17:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:05 compute-0 ceph-mon[73572]: pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 170 B/s wr, 1 op/s
Oct 08 10:17:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:05] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 08 10:17:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:05] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 08 10:17:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:17:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:05.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:17:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Oct 08 10:17:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:06.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:06 compute-0 ceph-mon[73572]: pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Oct 08 10:17:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:07.170Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:17:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:07.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:17:07 compute-0 nova_compute[262220]: 2025-10-08 10:17:07.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:17:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:07.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:17:08 compute-0 nova_compute[262220]: 2025-10-08 10:17:08.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Oct 08 10:17:08 compute-0 sudo[278664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:17:08 compute-0 sudo[278664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:08 compute-0 sudo[278664]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:08 compute-0 ceph-mon[73572]: pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Oct 08 10:17:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:08.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:17:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:09.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:17:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct 08 10:17:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:10.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:11 compute-0 ceph-mon[73572]: pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct 08 10:17:11 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1827604991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:11.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct 08 10:17:12 compute-0 nova_compute[262220]: 2025-10-08 10:17:12.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:12 compute-0 ceph-mon[73572]: pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct 08 10:17:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:17:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:12.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:17:13 compute-0 nova_compute[262220]: 2025-10-08 10:17:13.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:13.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:17:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:17:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:14.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:17:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:15] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 08 10:17:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:15] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 08 10:17:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:15.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:15 compute-0 ceph-mon[73572]: pgmap v979: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:17:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:17:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:16.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:16 compute-0 podman[278697]: 2025-10-08 10:17:16.929890409 +0000 UTC m=+0.092964386 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:17:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:17.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:17:17 compute-0 ceph-mon[73572]: pgmap v980: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:17:17 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4157667937' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:17:17 compute-0 nova_compute[262220]: 2025-10-08 10:17:17.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:17.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:17:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:17:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:17:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:17:18 compute-0 nova_compute[262220]: 2025-10-08 10:17:18.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:17:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:17:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:17:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:17:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:17:18 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2898920040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:17:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:17:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:19.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:19 compute-0 ceph-mon[73572]: pgmap v981: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:17:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:17:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 08 10:17:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/733252192' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:17:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 08 10:17:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/733252192' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:17:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:17:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:20.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:17:21 compute-0 ceph-mon[73572]: pgmap v982: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:17:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/733252192' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:17:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/733252192' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:17:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:21.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:17:22 compute-0 nova_compute[262220]: 2025-10-08 10:17:22.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:22 compute-0 ceph-mon[73572]: pgmap v983: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:17:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:23 compute-0 nova_compute[262220]: 2025-10-08 10:17:23.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:23.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:23 compute-0 podman[278730]: 2025-10-08 10:17:23.899737937 +0000 UTC m=+0.060734938 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 10:17:23 compute-0 podman[278731]: 2025-10-08 10:17:23.900007795 +0000 UTC m=+0.055594872 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 08 10:17:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 08 10:17:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:25 compute-0 ovn_controller[153187]: 2025-10-08T10:17:25Z|00057|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 08 10:17:25 compute-0 ceph-mon[73572]: pgmap v984: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 08 10:17:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:25] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Oct 08 10:17:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:25] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Oct 08 10:17:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:25.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:17:26 compute-0 ceph-mon[73572]: pgmap v985: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:17:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.003000098s ======
Oct 08 10:17:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:26.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000098s
Oct 08 10:17:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:27.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:17:27 compute-0 nova_compute[262220]: 2025-10-08 10:17:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:27.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:28 compute-0 nova_compute[262220]: 2025-10-08 10:17:28.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:17:28 compute-0 sudo[278774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:17:28 compute-0 sudo[278774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:28 compute-0 sudo[278774]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:28.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:29 compute-0 ceph-mon[73572]: pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:17:29 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3723377571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:29.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 08 10:17:30 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:17:30.780 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:17:30 compute-0 nova_compute[262220]: 2025-10-08 10:17:30.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:30 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:17:30.781 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:17:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:30.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:31 compute-0 ceph-mon[73572]: pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 08 10:17:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:31.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Oct 08 10:17:32 compute-0 nova_compute[262220]: 2025-10-08 10:17:32.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:17:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:17:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:32.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:33 compute-0 nova_compute[262220]: 2025-10-08 10:17:33.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:33 compute-0 ceph-mon[73572]: pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Oct 08 10:17:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:17:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:33.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Oct 08 10:17:34 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:17:34.783 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:17:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:34.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:35 compute-0 ceph-mon[73572]: pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Oct 08 10:17:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:17:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:17:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:35.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:35 compute-0 podman[278806]: 2025-10-08 10:17:35.891859002 +0000 UTC m=+0.057635027 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:17:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:17:36 compute-0 ceph-mon[73572]: pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:17:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:36.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:37.173Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:17:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:37.173Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:17:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:37.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:17:37 compute-0 nova_compute[262220]: 2025-10-08 10:17:37.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/545138405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:38 compute-0 nova_compute[262220]: 2025-10-08 10:17:38.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:38 compute-0 sudo[278830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:17:38 compute-0 sudo[278830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:38 compute-0 sudo[278830]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:38 compute-0 sudo[278855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:17:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:17:38 compute-0 sudo[278855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:38 compute-0 sudo[278855]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:38.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:17:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:17:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:17:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:17:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:17:38 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:17:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:17:38 compute-0 ceph-mon[73572]: pgmap v991: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:17:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:17:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:17:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:17:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:17:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:17:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:17:39 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:17:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:17:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:17:39 compute-0 sudo[278913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:17:39 compute-0 sudo[278913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:39 compute-0 sudo[278913]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:39 compute-0 sudo[278938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:17:39 compute-0 sudo[278938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:39 compute-0 podman[279008]: 2025-10-08 10:17:39.522766529 +0000 UTC m=+0.029519171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:17:39 compute-0 podman[279008]: 2025-10-08 10:17:39.650272937 +0000 UTC m=+0.157025559 container create de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:17:39 compute-0 systemd[1]: Started libpod-conmon-de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f.scope.
Oct 08 10:17:39 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:17:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:39.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:39 compute-0 podman[279008]: 2025-10-08 10:17:39.837955222 +0000 UTC m=+0.344707874 container init de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 10:17:39 compute-0 podman[279008]: 2025-10-08 10:17:39.845149705 +0000 UTC m=+0.351902327 container start de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:17:39 compute-0 systemd[1]: libpod-de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f.scope: Deactivated successfully.
Oct 08 10:17:39 compute-0 cool_elbakyan[279024]: 167 167
Oct 08 10:17:39 compute-0 conmon[279024]: conmon de16f3c78ea15a6c7100 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f.scope/container/memory.events
Oct 08 10:17:39 compute-0 podman[279008]: 2025-10-08 10:17:39.908487175 +0000 UTC m=+0.415239797 container attach de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 10:17:39 compute-0 podman[279008]: 2025-10-08 10:17:39.911416819 +0000 UTC m=+0.418169441 container died de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:17:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:17:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:17:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:17:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:17:40 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd4f7fcb379d97d6a5d82d342c3337ff457226c593bad8aaecaf76e5c30f4e5a-merged.mount: Deactivated successfully.
Oct 08 10:17:40 compute-0 podman[279008]: 2025-10-08 10:17:40.186512411 +0000 UTC m=+0.693265043 container remove de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:17:40 compute-0 systemd[1]: libpod-conmon-de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f.scope: Deactivated successfully.
Oct 08 10:17:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 82 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Oct 08 10:17:40 compute-0 podman[279050]: 2025-10-08 10:17:40.329342192 +0000 UTC m=+0.026099752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:17:40 compute-0 podman[279050]: 2025-10-08 10:17:40.427627368 +0000 UTC m=+0.124384898 container create ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 08 10:17:40 compute-0 systemd[1]: Started libpod-conmon-ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6.scope.
Oct 08 10:17:40 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:40 compute-0 podman[279050]: 2025-10-08 10:17:40.535488423 +0000 UTC m=+0.232245953 container init ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 10:17:40 compute-0 podman[279050]: 2025-10-08 10:17:40.543463129 +0000 UTC m=+0.240220659 container start ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 10:17:40 compute-0 podman[279050]: 2025-10-08 10:17:40.562568406 +0000 UTC m=+0.259325936 container attach ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:17:40 compute-0 recursing_ganguly[279067]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:17:40 compute-0 recursing_ganguly[279067]: --> All data devices are unavailable
Oct 08 10:17:40 compute-0 systemd[1]: libpod-ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6.scope: Deactivated successfully.
Oct 08 10:17:40 compute-0 podman[279050]: 2025-10-08 10:17:40.894432376 +0000 UTC m=+0.591189906 container died ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 08 10:17:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:40.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
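The anonymous "HEAD / HTTP/1.0" requests that radosgw keeps logging from 192.168.122.100 and 192.168.122.102 (roughly one per second per source, always answered 200 with near-zero latency) look like external liveness probes against the RGW beast frontend; the prober itself is not visible in this log. A minimal sketch of such a probe, assuming Python on the probing host; the target host and frontend port are assumptions, since the log only shows the source addresses:

    # Sketch of an RGW liveness probe issuing the same anonymous "HEAD /"
    # request seen in the radosgw/beast access lines above.
    # host and port are assumptions; the log does not show the frontend port.
    import http.client

    def rgw_alive(host="compute-0.ctlplane.example.com", port=8080, timeout=2.0):
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200  # log shows http_status=200
        except OSError:
            return False
        finally:
            conn.close()

    if __name__ == "__main__":
        print("rgw up:", rgw_alive())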
Oct 08 10:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981-merged.mount: Deactivated successfully.
Oct 08 10:17:41 compute-0 podman[279050]: 2025-10-08 10:17:41.034669954 +0000 UTC m=+0.731427484 container remove ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:17:41 compute-0 systemd[1]: libpod-conmon-ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6.scope: Deactivated successfully.
Oct 08 10:17:41 compute-0 sudo[278938]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:41 compute-0 ceph-mon[73572]: pgmap v992: 353 pgs: 353 active+clean; 82 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Oct 08 10:17:41 compute-0 sudo[279094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:17:41 compute-0 sudo[279094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:41 compute-0 sudo[279094]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:41 compute-0 sudo[279119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:17:41 compute-0 sudo[279119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:41 compute-0 podman[279185]: 2025-10-08 10:17:41.637242075 +0000 UTC m=+0.075872615 container create df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 10:17:41 compute-0 podman[279185]: 2025-10-08 10:17:41.586945825 +0000 UTC m=+0.025576405 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:17:41 compute-0 systemd[1]: Started libpod-conmon-df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58.scope.
Oct 08 10:17:41 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:17:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:41.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:41 compute-0 podman[279185]: 2025-10-08 10:17:41.853739349 +0000 UTC m=+0.292369919 container init df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:17:41 compute-0 podman[279185]: 2025-10-08 10:17:41.867676588 +0000 UTC m=+0.306307128 container start df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:17:41 compute-0 inspiring_ritchie[279203]: 167 167
Oct 08 10:17:41 compute-0 systemd[1]: libpod-df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58.scope: Deactivated successfully.
Oct 08 10:17:41 compute-0 conmon[279203]: conmon df97a4060bd16281427a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58.scope/container/memory.events
Oct 08 10:17:41 compute-0 podman[279185]: 2025-10-08 10:17:41.87922093 +0000 UTC m=+0.317851500 container attach df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:17:41 compute-0 podman[279185]: 2025-10-08 10:17:41.880566054 +0000 UTC m=+0.319196594 container died df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 10:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1c29ffbd1250596e9da410bdfb5815131dc55cdb2048a318527a6a22d8b5aa5-merged.mount: Deactivated successfully.
Oct 08 10:17:42 compute-0 podman[279185]: 2025-10-08 10:17:42.191457768 +0000 UTC m=+0.630088308 container remove df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 10:17:42 compute-0 systemd[1]: libpod-conmon-df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58.scope: Deactivated successfully.
Oct 08 10:17:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 82 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Oct 08 10:17:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/357973710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:17:42 compute-0 podman[279227]: 2025-10-08 10:17:42.406482715 +0000 UTC m=+0.084532724 container create 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:17:42 compute-0 nova_compute[262220]: 2025-10-08 10:17:42.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:42 compute-0 podman[279227]: 2025-10-08 10:17:42.351172974 +0000 UTC m=+0.029223003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:17:42 compute-0 systemd[1]: Started libpod-conmon-01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87.scope.
Oct 08 10:17:42 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:42 compute-0 podman[279227]: 2025-10-08 10:17:42.554523674 +0000 UTC m=+0.232573703 container init 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 10:17:42 compute-0 podman[279227]: 2025-10-08 10:17:42.561194229 +0000 UTC m=+0.239244238 container start 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 10:17:42 compute-0 podman[279227]: 2025-10-08 10:17:42.61740651 +0000 UTC m=+0.295456549 container attach 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]: {
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:     "1": [
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:         {
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "devices": [
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "/dev/loop3"
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             ],
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "lv_name": "ceph_lv0",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "lv_size": "21470642176",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "name": "ceph_lv0",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "tags": {
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.cluster_name": "ceph",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.crush_device_class": "",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.encrypted": "0",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.osd_id": "1",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.type": "block",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.vdo": "0",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:                 "ceph.with_tpm": "0"
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             },
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "type": "block",
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:             "vg_name": "ceph_vg0"
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:         }
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]:     ]
Oct 08 10:17:42 compute-0 sharp_brahmagupta[279244]: }
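The JSON printed by the sharp_brahmagupta container above is the result of the `ceph-volume lvm list --format json` call dispatched by cephadm at 10:17:41: it maps OSD id 1 to the logical volume /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3. A minimal sketch of extracting that OSD-to-device mapping from the captured JSON (the filename is illustrative):

    # Sketch: parse the `ceph-volume lvm list --format json` output shown
    # above and print one line per OSD with its backing LV and devices.
    import json

    with open("ceph-volume-lvm-list.json") as f:  # illustrative filename
        report = json.load(f)

    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print("osd.%s type=%s lv=%s devices=%s osd_fsid=%s" % (
                osd_id,
                lv.get("type"),
                lv.get("lv_path"),
                ",".join(lv.get("devices", [])),
                tags.get("ceph.osd_fsid", ""),
            ))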
Oct 08 10:17:42 compute-0 systemd[1]: libpod-01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87.scope: Deactivated successfully.
Oct 08 10:17:42 compute-0 podman[279253]: 2025-10-08 10:17:42.904357443 +0000 UTC m=+0.026519074 container died 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:17:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:42.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43-merged.mount: Deactivated successfully.
Oct 08 10:17:43 compute-0 podman[279253]: 2025-10-08 10:17:43.06442777 +0000 UTC m=+0.186589371 container remove 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:17:43 compute-0 systemd[1]: libpod-conmon-01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87.scope: Deactivated successfully.
Oct 08 10:17:43 compute-0 nova_compute[262220]: 2025-10-08 10:17:43.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:43 compute-0 sudo[279119]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:43 compute-0 sudo[279268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:17:43 compute-0 sudo[279268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:43 compute-0 sudo[279268]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:43 compute-0 sudo[279294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:17:43 compute-0 sudo[279294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:43 compute-0 ceph-mon[73572]: pgmap v993: 353 pgs: 353 active+clean; 82 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Oct 08 10:17:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/4140928616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:17:43 compute-0 podman[279360]: 2025-10-08 10:17:43.673722068 +0000 UTC m=+0.057391329 container create bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:17:43 compute-0 systemd[1]: Started libpod-conmon-bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76.scope.
Oct 08 10:17:43 compute-0 podman[279360]: 2025-10-08 10:17:43.641327505 +0000 UTC m=+0.024996796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:17:43 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:17:43 compute-0 podman[279360]: 2025-10-08 10:17:43.780216978 +0000 UTC m=+0.163886259 container init bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:17:43 compute-0 podman[279360]: 2025-10-08 10:17:43.792232236 +0000 UTC m=+0.175901497 container start bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 10:17:43 compute-0 clever_pasteur[279376]: 167 167
Oct 08 10:17:43 compute-0 systemd[1]: libpod-bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76.scope: Deactivated successfully.
Oct 08 10:17:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:43.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:43 compute-0 podman[279360]: 2025-10-08 10:17:43.821396446 +0000 UTC m=+0.205065747 container attach bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:17:43 compute-0 podman[279360]: 2025-10-08 10:17:43.822587134 +0000 UTC m=+0.206256395 container died bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Oct 08 10:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-88aacc33048058288e60e6c19667588cbc1d125a1384724b7e38fc639b12fb67-merged.mount: Deactivated successfully.
Oct 08 10:17:43 compute-0 podman[279360]: 2025-10-08 10:17:43.945120421 +0000 UTC m=+0.328789692 container remove bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 10:17:43 compute-0 systemd[1]: libpod-conmon-bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76.scope: Deactivated successfully.
Oct 08 10:17:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:44 compute-0 podman[279399]: 2025-10-08 10:17:44.122224746 +0000 UTC m=+0.045003251 container create da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 08 10:17:44 compute-0 systemd[1]: Started libpod-conmon-da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51.scope.
Oct 08 10:17:44 compute-0 podman[279399]: 2025-10-08 10:17:44.101792668 +0000 UTC m=+0.024571193 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:17:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:17:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:44 compute-0 podman[279399]: 2025-10-08 10:17:44.22135606 +0000 UTC m=+0.144134585 container init da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 10:17:44 compute-0 podman[279399]: 2025-10-08 10:17:44.228258492 +0000 UTC m=+0.151036997 container start da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 10:17:44 compute-0 podman[279399]: 2025-10-08 10:17:44.232019443 +0000 UTC m=+0.154797958 container attach da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 10:17:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:17:44 compute-0 nova_compute[262220]: 2025-10-08 10:17:44.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:17:44 compute-0 lvm[279490]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:17:44 compute-0 lvm[279490]: VG ceph_vg0 finished
Oct 08 10:17:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:44.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:44 compute-0 boring_cray[279416]: {}
Oct 08 10:17:44 compute-0 systemd[1]: libpod-da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51.scope: Deactivated successfully.
Oct 08 10:17:44 compute-0 podman[279399]: 2025-10-08 10:17:44.969181641 +0000 UTC m=+0.891960146 container died da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 08 10:17:44 compute-0 systemd[1]: libpod-da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51.scope: Consumed 1.201s CPU time.
Oct 08 10:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3-merged.mount: Deactivated successfully.
Oct 08 10:17:45 compute-0 podman[279399]: 2025-10-08 10:17:45.021613219 +0000 UTC m=+0.944391714 container remove da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 10:17:45 compute-0 systemd[1]: libpod-conmon-da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51.scope: Deactivated successfully.
Oct 08 10:17:45 compute-0 sudo[279294]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:17:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:17:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:17:45 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:17:45 compute-0 sudo[279508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:17:45 compute-0 sudo[279508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:45 compute-0 sudo[279508]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:45 compute-0 ceph-mon[73572]: pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:17:45 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:17:45 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:17:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:17:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:17:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:45.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:45 compute-0 nova_compute[262220]: 2025-10-08 10:17:45.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:17:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:17:46 compute-0 nova_compute[262220]: 2025-10-08 10:17:46.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:17:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:46.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:47.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:17:47 compute-0 nova_compute[262220]: 2025-10-08 10:17:47.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:47 compute-0 ceph-mon[73572]: pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:17:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:17:47
Oct 08 10:17:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:17:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:17:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'backups', '.nfs', '.mgr', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta']
Oct 08 10:17:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:17:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:47.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:17:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:17:47 compute-0 nova_compute[262220]: 2025-10-08 10:17:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:17:47 compute-0 nova_compute[262220]: 2025-10-08 10:17:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:17:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:17:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:17:47 compute-0 podman[279535]: 2025-10-08 10:17:47.92690404 +0000 UTC m=+0.082494798 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller)
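The podman event above records a successful periodic healthcheck of the ovn_controller container (health_status=healthy, test '/openstack/healthcheck' per the embedded config_data). A minimal sketch of triggering the same check on demand, assuming the podman CLI on the host and sufficient privileges:

    # Sketch: run the ovn_controller container healthcheck that podman
    # executes periodically (see the health_status event above).
    import subprocess

    r = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True, text=True,
    )
    # `podman healthcheck run` exits 0 when the check passes.
    print("ovn_controller healthy" if r.returncode == 0
          else "unhealthy: %s" % (r.stderr.strip() or r.stdout.strip()))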
Oct 08 10:17:47 compute-0 nova_compute[262220]: 2025-10-08 10:17:47.988 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:17:47 compute-0 nova_compute[262220]: 2025-10-08 10:17:47.988 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:17:47 compute-0 nova_compute[262220]: 2025-10-08 10:17:47.989 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:17:47 compute-0 nova_compute[262220]: 2025-10-08 10:17:47.990 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:17:47 compute-0 nova_compute[262220]: 2025-10-08 10:17:47.990 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:17:48 compute-0 nova_compute[262220]: 2025-10-08 10:17:48.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:17:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:17:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:17:48 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705253964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:17:48 compute-0 nova_compute[262220]: 2025-10-08 10:17:48.482 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:17:48 compute-0 sudo[279583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:17:48 compute-0 sudo[279583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:17:48 compute-0 sudo[279583]: pam_unix(sudo:session): session closed for user root
Oct 08 10:17:48 compute-0 nova_compute[262220]: 2025-10-08 10:17:48.692 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:17:48 compute-0 nova_compute[262220]: 2025-10-08 10:17:48.693 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4525MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:17:48 compute-0 nova_compute[262220]: 2025-10-08 10:17:48.694 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:17:48 compute-0 nova_compute[262220]: 2025-10-08 10:17:48.694 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:17:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000066s ======
Oct 08 10:17:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:48.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000066s
Oct 08 10:17:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:49 compute-0 ceph-mon[73572]: pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 08 10:17:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/705253964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:49 compute-0 nova_compute[262220]: 2025-10-08 10:17:49.564 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:17:49 compute-0 nova_compute[262220]: 2025-10-08 10:17:49.565 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:17:49 compute-0 nova_compute[262220]: 2025-10-08 10:17:49.584 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:17:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:49.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:49 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct 08 10:17:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:17:50 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494990636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:50 compute-0 nova_compute[262220]: 2025-10-08 10:17:50.041 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:17:50 compute-0 nova_compute[262220]: 2025-10-08 10:17:50.046 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:17:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 08 10:17:50 compute-0 nova_compute[262220]: 2025-10-08 10:17:50.298 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:17:50 compute-0 nova_compute[262220]: 2025-10-08 10:17:50.299 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:17:50 compute-0 nova_compute[262220]: 2025-10-08 10:17:50.300 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:17:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/494990636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:50.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:51 compute-0 nova_compute[262220]: 2025-10-08 10:17:51.300 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:17:51 compute-0 nova_compute[262220]: 2025-10-08 10:17:51.300 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:17:51 compute-0 nova_compute[262220]: 2025-10-08 10:17:51.301 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:17:51 compute-0 nova_compute[262220]: 2025-10-08 10:17:51.348 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:17:51 compute-0 nova_compute[262220]: 2025-10-08 10:17:51.349 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:17:51 compute-0 nova_compute[262220]: 2025-10-08 10:17:51.349 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:17:51 compute-0 nova_compute[262220]: 2025-10-08 10:17:51.349 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:17:51 compute-0 ceph-mon[73572]: pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 08 10:17:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2511713893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:51.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:51 compute-0 nova_compute[262220]: 2025-10-08 10:17:51.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:17:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 485 KiB/s wr, 87 op/s
Oct 08 10:17:52 compute-0 nova_compute[262220]: 2025-10-08 10:17:52.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2368244423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:52 compute-0 ceph-mon[73572]: pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 485 KiB/s wr, 87 op/s
Oct 08 10:17:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:52.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:53 compute-0 nova_compute[262220]: 2025-10-08 10:17:53.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:53.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3189929695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 486 KiB/s wr, 112 op/s
Oct 08 10:17:54 compute-0 podman[279636]: 2025-10-08 10:17:54.897554851 +0000 UTC m=+0.056199482 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:17:54 compute-0 podman[279637]: 2025-10-08 10:17:54.899475653 +0000 UTC m=+0.056506811 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 08 10:17:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:54.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:55 compute-0 ceph-mon[73572]: pgmap v999: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 486 KiB/s wr, 112 op/s
Oct 08 10:17:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:55] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 08 10:17:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:55] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 08 10:17:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:55.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:17:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3795117804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Oct 08 10:17:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:56.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:57.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:17:57 compute-0 ceph-mon[73572]: pgmap v1000: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Oct 08 10:17:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:17:57.416 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:17:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:17:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:17:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:17:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:17:57 compute-0 nova_compute[262220]: 2025-10-08 10:17:57.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:17:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:17:58 compute-0 nova_compute[262220]: 2025-10-08 10:17:58.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:17:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Oct 08 10:17:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:17:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:58.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:17:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:17:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:17:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:17:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:17:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:17:59 compute-0 ceph-mon[73572]: pgmap v1001: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Oct 08 10:17:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3254729871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:17:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:17:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:17:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:59.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Oct 08 10:18:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:00.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:01 compute-0 ceph-mon[73572]: pgmap v1002: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Oct 08 10:18:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:01.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:18:02 compute-0 nova_compute[262220]: 2025-10-08 10:18:02.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:18:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:18:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:02.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:03 compute-0 ceph-mon[73572]: pgmap v1003: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:18:03 compute-0 nova_compute[262220]: 2025-10-08 10:18:03.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.446583) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918683446662, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1164, "num_deletes": 501, "total_data_size": 1412293, "memory_usage": 1445744, "flush_reason": "Manual Compaction"}
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918683516808, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1006345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28374, "largest_seqno": 29537, "table_properties": {"data_size": 1001883, "index_size": 1538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14512, "raw_average_key_size": 19, "raw_value_size": 990453, "raw_average_value_size": 1336, "num_data_blocks": 67, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918609, "oldest_key_time": 1759918609, "file_creation_time": 1759918683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 70275 microseconds, and 3709 cpu microseconds.
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.516863) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1006345 bytes OK
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.516891) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.752006) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.752098) EVENT_LOG_v1 {"time_micros": 1759918683752089, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.752122) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1405960, prev total WAL file size 1405960, number of live WAL files 2.
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.753767) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(982KB)], [62(16MB)]
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918683753885, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 17870848, "oldest_snapshot_seqno": -1}
Oct 08 10:18:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:03.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5756 keys, 12129461 bytes, temperature: kUnknown
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918683947878, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12129461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12093108, "index_size": 20883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 148954, "raw_average_key_size": 25, "raw_value_size": 11991340, "raw_average_value_size": 2083, "num_data_blocks": 834, "num_entries": 5756, "num_filter_entries": 5756, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:18:03 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:18:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.948155) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12129461 bytes
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.100612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.1 rd, 62.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(29.8) write-amplify(12.1) OK, records in: 6749, records dropped: 993 output_compression: NoCompression
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.100671) EVENT_LOG_v1 {"time_micros": 1759918684100650, "job": 34, "event": "compaction_finished", "compaction_time_micros": 194062, "compaction_time_cpu_micros": 50409, "output_level": 6, "num_output_files": 1, "total_output_size": 12129461, "num_input_records": 6749, "num_output_records": 5756, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918684101337, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918684106218, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.753529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:18:04 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:18:04 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:18:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:18:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:18:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:04.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:18:05 compute-0 ceph-mon[73572]: pgmap v1004: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 08 10:18:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:05] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 08 10:18:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:05] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 08 10:18:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct 08 10:18:06 compute-0 ceph-mon[73572]: pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct 08 10:18:06 compute-0 podman[279685]: 2025-10-08 10:18:06.920088905 +0000 UTC m=+0.068837359 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 10:18:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:06.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:07.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:18:07 compute-0 nova_compute[262220]: 2025-10-08 10:18:07.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:07.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:08 compute-0 nova_compute[262220]: 2025-10-08 10:18:08.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct 08 10:18:08 compute-0 sudo[279707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:18:08 compute-0 sudo[279707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:08 compute-0 sudo[279707]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:08.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T10:18:09.566221338Z level=info msg="Update check succeeded" duration=53.637388ms
Oct 08 10:18:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T10:18:09.629643791Z level=info msg="Update check succeeded" duration=117.01454ms
Oct 08 10:18:09 compute-0 ceph-mon[73572]: pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct 08 10:18:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=cleanup t=2025-10-08T10:18:09.685566033Z level=info msg="Completed cleanup jobs" duration=250.462248ms
Oct 08 10:18:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:09.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Oct 08 10:18:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:10.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:11 compute-0 ceph-mon[73572]: pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Oct 08 10:18:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:11.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:12 compute-0 nova_compute[262220]: 2025-10-08 10:18:12.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:12 compute-0 ceph-mon[73572]: pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:12.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:13 compute-0 nova_compute[262220]: 2025-10-08 10:18:13.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:13.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:18:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:14.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:15 compute-0 ceph-mon[73572]: pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:18:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:15] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 08 10:18:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:15] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 08 10:18:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:18:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:16.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:18:17 compute-0 ceph-mon[73572]: pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:17.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:18:17 compute-0 nova_compute[262220]: 2025-10-08 10:18:17.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:18:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:18:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:18:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:18:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:18:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:18:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:18:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:18:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:18:18 compute-0 nova_compute[262220]: 2025-10-08 10:18:18.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:18 compute-0 podman[279742]: 2025-10-08 10:18:18.925073327 +0000 UTC m=+0.084985576 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 08 10:18:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:18.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:19 compute-0 ceph-mon[73572]: pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:19.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:18:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:20.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:21 compute-0 ceph-mon[73572]: pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:18:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1329066292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:18:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1329066292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:18:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:21.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:22 compute-0 nova_compute[262220]: 2025-10-08 10:18:22.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:22 compute-0 ceph-mon[73572]: pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:23 compute-0 nova_compute[262220]: 2025-10-08 10:18:23.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:23.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:18:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:24.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:25 compute-0 ceph-mon[73572]: pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:18:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:18:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:18:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:25.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:25 compute-0 podman[279775]: 2025-10-08 10:18:25.898692351 +0000 UTC m=+0.059730480 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 08 10:18:25 compute-0 podman[279776]: 2025-10-08 10:18:25.898209855 +0000 UTC m=+0.053653344 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 08 10:18:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:18:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:26.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:18:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:27.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:18:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:27.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:18:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:27.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:18:27 compute-0 nova_compute[262220]: 2025-10-08 10:18:27.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:27 compute-0 ceph-mon[73572]: pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:27.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:28 compute-0 nova_compute[262220]: 2025-10-08 10:18:28.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:28 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3105044423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:18:28 compute-0 sudo[279817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:18:28 compute-0 sudo[279817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:28 compute-0 sudo[279817]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:28.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:29 compute-0 ceph-mon[73572]: pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:29.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:18:30 compute-0 ceph-mon[73572]: pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:18:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:31.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:32 compute-0 nova_compute[262220]: 2025-10-08 10:18:32.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:18:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:18:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:32.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:33 compute-0 nova_compute[262220]: 2025-10-08 10:18:33.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:33 compute-0 ceph-mon[73572]: pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:18:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:18:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:33.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:18:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:34.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:35 compute-0 ceph-mon[73572]: pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:18:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:35] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:18:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:35] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:18:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:18:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:35.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:18:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:18:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:36.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:37.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:18:37 compute-0 ceph-mon[73572]: pgmap v1020: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:18:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/4208943231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:18:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2483617096' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:18:37 compute-0 nova_compute[262220]: 2025-10-08 10:18:37.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:37 compute-0 podman[279851]: 2025-10-08 10:18:37.902104129 +0000 UTC m=+0.058092296 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:18:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:37.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:38 compute-0 nova_compute[262220]: 2025-10-08 10:18:38.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:18:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:38.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:39 compute-0 ceph-mon[73572]: pgmap v1021: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:18:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:18:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:39.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:18:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:18:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:41 compute-0 ceph-mon[73572]: pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:18:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:41.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:18:42 compute-0 nova_compute[262220]: 2025-10-08 10:18:42.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:42 compute-0 ceph-mon[73572]: pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 08 10:18:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:42.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:43 compute-0 nova_compute[262220]: 2025-10-08 10:18:43.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:43.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 08 10:18:44 compute-0 nova_compute[262220]: 2025-10-08 10:18:44.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:44.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:18:45.283 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:18:45 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:18:45.283 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:18:45 compute-0 nova_compute[262220]: 2025-10-08 10:18:45.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:45 compute-0 sudo[279879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:18:45 compute-0 sudo[279879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:45 compute-0 sudo[279879]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:45 compute-0 sudo[279904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:18:45 compute-0 ceph-mon[73572]: pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 08 10:18:45 compute-0 sudo[279904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:45] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:18:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:45] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:18:45 compute-0 nova_compute[262220]: 2025-10-08 10:18:45.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:45.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:46 compute-0 sudo[279904]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:46 compute-0 sudo[279964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:18:46 compute-0 sudo[279964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:46 compute-0 sudo[279964]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:46 compute-0 sudo[279989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 08 10:18:46 compute-0 sudo[279989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:18:46 compute-0 sudo[279989]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:18:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:18:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 08 10:18:46 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 10:18:46 compute-0 nova_compute[262220]: 2025-10-08 10:18:46.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 08 10:18:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 08 10:18:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:47.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:18:47 compute-0 nova_compute[262220]: 2025-10-08 10:18:47.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 10:18:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 10:18:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:47 compute-0 ceph-mon[73572]: pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:18:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 10:18:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:18:47
Oct 08 10:18:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:18:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:18:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', '.nfs', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'backups']
Oct 08 10:18:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:18:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:18:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:18:47 compute-0 nova_compute[262220]: 2025-10-08 10:18:47.884 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:47 compute-0 nova_compute[262220]: 2025-10-08 10:18:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:47.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:18:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:18:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 10:18:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 10:18:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:18:48 compute-0 nova_compute[262220]: 2025-10-08 10:18:48.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:18:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:18:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:18:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:48 compute-0 ceph-mon[73572]: pgmap v1026: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:18:48 compute-0 sudo[280036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:18:48 compute-0 sudo[280036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:48 compute-0 sudo[280036]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:48 compute-0 nova_compute[262220]: 2025-10-08 10:18:48.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:48 compute-0 nova_compute[262220]: 2025-10-08 10:18:48.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:18:48 compute-0 nova_compute[262220]: 2025-10-08 10:18:48.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:18:48 compute-0 nova_compute[262220]: 2025-10-08 10:18:48.926 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:18:48 compute-0 nova_compute[262220]: 2025-10-08 10:18:48.927 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 10:18:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:18:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:48.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:18:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:18:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:18:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:18:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:18:49 compute-0 sudo[280062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:18:49 compute-0 sudo[280062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:49 compute-0 sudo[280062]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:49 compute-0 sudo[280093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:18:49 compute-0 sudo[280093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:49 compute-0 podman[280086]: 2025-10-08 10:18:49.620278738 +0000 UTC m=+0.118812077 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 08 10:18:49 compute-0 nova_compute[262220]: 2025-10-08 10:18:49.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:18:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:49.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:18:49 compute-0 nova_compute[262220]: 2025-10-08 10:18:49.935 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:18:49 compute-0 nova_compute[262220]: 2025-10-08 10:18:49.936 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:18:49 compute-0 nova_compute[262220]: 2025-10-08 10:18:49.936 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:18:49 compute-0 nova_compute[262220]: 2025-10-08 10:18:49.936 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:18:49 compute-0 nova_compute[262220]: 2025-10-08 10:18:49.937 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:18:50 compute-0 podman[280179]: 2025-10-08 10:18:49.94797346 +0000 UTC m=+0.024696258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:18:50 compute-0 podman[280179]: 2025-10-08 10:18:50.065171624 +0000 UTC m=+0.141894402 container create e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:18:50 compute-0 ceph-mon[73572]: pgmap v1027: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:18:50 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:18:50 compute-0 systemd[1]: Started libpod-conmon-e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045.scope.
Oct 08 10:18:50 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:18:50 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 08 10:18:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:18:50 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/582462880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:18:50 compute-0 podman[280179]: 2025-10-08 10:18:50.45154445 +0000 UTC m=+0.528267248 container init e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:18:50 compute-0 nova_compute[262220]: 2025-10-08 10:18:50.455 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:18:50 compute-0 podman[280179]: 2025-10-08 10:18:50.459955132 +0000 UTC m=+0.536677910 container start e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 10:18:50 compute-0 systemd[1]: libpod-e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045.scope: Deactivated successfully.
Oct 08 10:18:50 compute-0 peaceful_boyd[280216]: 167 167
Oct 08 10:18:50 compute-0 conmon[280216]: conmon e66af8cdc9bb4a3a7ec5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045.scope/container/memory.events
Oct 08 10:18:50 compute-0 podman[280179]: 2025-10-08 10:18:50.612092925 +0000 UTC m=+0.688815713 container attach e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 10:18:50 compute-0 podman[280179]: 2025-10-08 10:18:50.612739716 +0000 UTC m=+0.689462494 container died e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 10:18:50 compute-0 nova_compute[262220]: 2025-10-08 10:18:50.636 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:18:50 compute-0 nova_compute[262220]: 2025-10-08 10:18:50.639 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4558MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:18:50 compute-0 nova_compute[262220]: 2025-10-08 10:18:50.639 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:18:50 compute-0 nova_compute[262220]: 2025-10-08 10:18:50.639 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:18:50 compute-0 nova_compute[262220]: 2025-10-08 10:18:50.703 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:18:50 compute-0 nova_compute[262220]: 2025-10-08 10:18:50.704 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:18:50 compute-0 nova_compute[262220]: 2025-10-08 10:18:50.722 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f10d2577c33aaf17490b22058472f07eb4194ac0c193a7561f0b20fbebb4a537-merged.mount: Deactivated successfully.
Oct 08 10:18:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:50.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:18:51 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983260049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:18:51 compute-0 podman[280179]: 2025-10-08 10:18:51.261517195 +0000 UTC m=+1.338239973 container remove e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:18:51 compute-0 nova_compute[262220]: 2025-10-08 10:18:51.266 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:18:51 compute-0 systemd[1]: libpod-conmon-e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045.scope: Deactivated successfully.
Oct 08 10:18:51 compute-0 nova_compute[262220]: 2025-10-08 10:18:51.277 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:18:51 compute-0 ceph-mon[73572]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 08 10:18:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/582462880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:18:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2381079717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:18:51 compute-0 nova_compute[262220]: 2025-10-08 10:18:51.299 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:18:51 compute-0 nova_compute[262220]: 2025-10-08 10:18:51.301 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:18:51 compute-0 nova_compute[262220]: 2025-10-08 10:18:51.301 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:18:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 75 op/s
Oct 08 10:18:51 compute-0 podman[280265]: 2025-10-08 10:18:51.514835736 +0000 UTC m=+0.117595238 container create 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:18:51 compute-0 podman[280265]: 2025-10-08 10:18:51.423453944 +0000 UTC m=+0.026213476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:18:51 compute-0 systemd[1]: Started libpod-conmon-083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0.scope.
Oct 08 10:18:51 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:51 compute-0 podman[280265]: 2025-10-08 10:18:51.772774355 +0000 UTC m=+0.375533887 container init 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:18:51 compute-0 podman[280265]: 2025-10-08 10:18:51.782095035 +0000 UTC m=+0.384854537 container start 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 08 10:18:51 compute-0 podman[280265]: 2025-10-08 10:18:51.811183065 +0000 UTC m=+0.413942587 container attach 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 10:18:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:51.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:52 compute-0 optimistic_beaver[280282]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:18:52 compute-0 optimistic_beaver[280282]: --> All data devices are unavailable
Oct 08 10:18:52 compute-0 systemd[1]: libpod-083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0.scope: Deactivated successfully.
Oct 08 10:18:52 compute-0 podman[280265]: 2025-10-08 10:18:52.165801236 +0000 UTC m=+0.768560758 container died 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a-merged.mount: Deactivated successfully.
Oct 08 10:18:52 compute-0 nova_compute[262220]: 2025-10-08 10:18:52.301 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:52 compute-0 nova_compute[262220]: 2025-10-08 10:18:52.303 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:18:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3983260049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:18:52 compute-0 ceph-mon[73572]: pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 75 op/s
Oct 08 10:18:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3294409412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:18:52 compute-0 podman[280265]: 2025-10-08 10:18:52.398882192 +0000 UTC m=+1.001641694 container remove 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:18:52 compute-0 systemd[1]: libpod-conmon-083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0.scope: Deactivated successfully.
Oct 08 10:18:52 compute-0 sudo[280093]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:52 compute-0 nova_compute[262220]: 2025-10-08 10:18:52.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:52 compute-0 sudo[280310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:18:52 compute-0 sudo[280310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:52 compute-0 sudo[280310]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:52 compute-0 sudo[280335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:18:52 compute-0 sudo[280335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:52.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:53 compute-0 podman[280400]: 2025-10-08 10:18:52.988197231 +0000 UTC m=+0.026477835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:18:53 compute-0 nova_compute[262220]: 2025-10-08 10:18:53.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:53 compute-0 podman[280400]: 2025-10-08 10:18:53.324689297 +0000 UTC m=+0.362969881 container create b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:18:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 75 op/s
Oct 08 10:18:53 compute-0 systemd[1]: Started libpod-conmon-b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55.scope.
Oct 08 10:18:53 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:18:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2666340170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:18:53 compute-0 podman[280400]: 2025-10-08 10:18:53.579987761 +0000 UTC m=+0.618268375 container init b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:18:53 compute-0 podman[280400]: 2025-10-08 10:18:53.587555695 +0000 UTC m=+0.625836279 container start b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 08 10:18:53 compute-0 zealous_morse[280417]: 167 167
Oct 08 10:18:53 compute-0 systemd[1]: libpod-b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55.scope: Deactivated successfully.
Oct 08 10:18:53 compute-0 podman[280400]: 2025-10-08 10:18:53.594751728 +0000 UTC m=+0.633032332 container attach b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 08 10:18:53 compute-0 podman[280400]: 2025-10-08 10:18:53.595609015 +0000 UTC m=+0.633889599 container died b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 10:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b4ee78217bacdce93a5b98c659bbc000db6e7584692c858a88ef759dd807d80-merged.mount: Deactivated successfully.
Oct 08 10:18:53 compute-0 podman[280400]: 2025-10-08 10:18:53.696340388 +0000 UTC m=+0.734620972 container remove b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:18:53 compute-0 systemd[1]: libpod-conmon-b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55.scope: Deactivated successfully.
Oct 08 10:18:53 compute-0 podman[280441]: 2025-10-08 10:18:53.864221779 +0000 UTC m=+0.042312567 container create f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 08 10:18:53 compute-0 nova_compute[262220]: 2025-10-08 10:18:53.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:18:53 compute-0 systemd[1]: Started libpod-conmon-f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab.scope.
Oct 08 10:18:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:53.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:53 compute-0 podman[280441]: 2025-10-08 10:18:53.845311828 +0000 UTC m=+0.023402636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:18:53 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:53 compute-0 podman[280441]: 2025-10-08 10:18:53.970732328 +0000 UTC m=+0.148823126 container init f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 10:18:53 compute-0 podman[280441]: 2025-10-08 10:18:53.978178519 +0000 UTC m=+0.156269297 container start f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 10:18:53 compute-0 podman[280441]: 2025-10-08 10:18:53.982223939 +0000 UTC m=+0.160314747 container attach f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:18:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:54 compute-0 peaceful_spence[280458]: {
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:     "1": [
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:         {
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "devices": [
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "/dev/loop3"
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             ],
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "lv_name": "ceph_lv0",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "lv_size": "21470642176",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "name": "ceph_lv0",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "tags": {
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.cluster_name": "ceph",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.crush_device_class": "",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.encrypted": "0",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.osd_id": "1",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.type": "block",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.vdo": "0",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:                 "ceph.with_tpm": "0"
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             },
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "type": "block",
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:             "vg_name": "ceph_vg0"
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:         }
Oct 08 10:18:54 compute-0 peaceful_spence[280458]:     ]
Oct 08 10:18:54 compute-0 peaceful_spence[280458]: }
Oct 08 10:18:54 compute-0 systemd[1]: libpod-f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab.scope: Deactivated successfully.
Oct 08 10:18:54 compute-0 podman[280441]: 2025-10-08 10:18:54.276414169 +0000 UTC m=+0.454504967 container died f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:18:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949-merged.mount: Deactivated successfully.
Oct 08 10:18:54 compute-0 podman[280441]: 2025-10-08 10:18:54.338797233 +0000 UTC m=+0.516888011 container remove f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:18:54 compute-0 systemd[1]: libpod-conmon-f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab.scope: Deactivated successfully.
Oct 08 10:18:54 compute-0 sudo[280335]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:54 compute-0 sudo[280483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:18:54 compute-0 sudo[280483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:54 compute-0 sudo[280483]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:54 compute-0 sudo[280508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:18:54 compute-0 sudo[280508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:54 compute-0 ceph-mon[73572]: pgmap v1029: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 75 op/s
Oct 08 10:18:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3286169347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:18:54 compute-0 podman[280573]: 2025-10-08 10:18:54.920421214 +0000 UTC m=+0.036956044 container create 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:18:54 compute-0 systemd[1]: Started libpod-conmon-98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da.scope.
Oct 08 10:18:54 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:18:55 compute-0 podman[280573]: 2025-10-08 10:18:54.905553345 +0000 UTC m=+0.022088195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:18:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:18:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:18:55 compute-0 podman[280573]: 2025-10-08 10:18:55.005121909 +0000 UTC m=+0.121656799 container init 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 08 10:18:55 compute-0 podman[280573]: 2025-10-08 10:18:55.016191777 +0000 UTC m=+0.132726597 container start 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:18:55 compute-0 podman[280573]: 2025-10-08 10:18:55.019962639 +0000 UTC m=+0.136497579 container attach 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:18:55 compute-0 romantic_wescoff[280589]: 167 167
Oct 08 10:18:55 compute-0 systemd[1]: libpod-98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da.scope: Deactivated successfully.
Oct 08 10:18:55 compute-0 podman[280573]: 2025-10-08 10:18:55.022596204 +0000 UTC m=+0.139131044 container died 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 08 10:18:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cb75991f0aa03e0cb433af04209c40d6304384a53b59439bfc624eb4fe2506b-merged.mount: Deactivated successfully.
Oct 08 10:18:55 compute-0 podman[280573]: 2025-10-08 10:18:55.068427443 +0000 UTC m=+0.184962273 container remove 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 10:18:55 compute-0 systemd[1]: libpod-conmon-98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da.scope: Deactivated successfully.
Oct 08 10:18:55 compute-0 podman[280614]: 2025-10-08 10:18:55.232008296 +0000 UTC m=+0.045545472 container create f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:18:55 compute-0 systemd[1]: Started libpod-conmon-f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f.scope.
Oct 08 10:18:55 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:18:55.285 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:18:55 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:18:55 compute-0 podman[280614]: 2025-10-08 10:18:55.213280071 +0000 UTC m=+0.026817267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:18:55 compute-0 podman[280614]: 2025-10-08 10:18:55.309844979 +0000 UTC m=+0.123382175 container init f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:18:55 compute-0 podman[280614]: 2025-10-08 10:18:55.318187548 +0000 UTC m=+0.131724724 container start f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 10:18:55 compute-0 podman[280614]: 2025-10-08 10:18:55.321347771 +0000 UTC m=+0.134884957 container attach f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:18:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.3 MiB/s wr, 66 op/s
Oct 08 10:18:55 compute-0 ceph-mon[73572]: pgmap v1030: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.3 MiB/s wr, 66 op/s
Oct 08 10:18:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:55] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Oct 08 10:18:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:55] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Oct 08 10:18:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:55.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:55 compute-0 lvm[280720]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:18:55 compute-0 lvm[280720]: VG ceph_vg0 finished
Oct 08 10:18:56 compute-0 podman[280706]: 2025-10-08 10:18:56.011432313 +0000 UTC m=+0.064815673 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible)
Oct 08 10:18:56 compute-0 podman[280705]: 2025-10-08 10:18:56.011761464 +0000 UTC m=+0.065876318 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd)
Oct 08 10:18:56 compute-0 blissful_goldwasser[280631]: {}
Oct 08 10:18:56 compute-0 systemd[1]: libpod-f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f.scope: Deactivated successfully.
Oct 08 10:18:56 compute-0 systemd[1]: libpod-f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f.scope: Consumed 1.175s CPU time.
Oct 08 10:18:56 compute-0 podman[280614]: 2025-10-08 10:18:56.055617271 +0000 UTC m=+0.869154467 container died f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 10:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41-merged.mount: Deactivated successfully.
Oct 08 10:18:56 compute-0 podman[280614]: 2025-10-08 10:18:56.112407694 +0000 UTC m=+0.925944900 container remove f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:18:56 compute-0 systemd[1]: libpod-conmon-f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f.scope: Deactivated successfully.
Oct 08 10:18:56 compute-0 sudo[280508]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:18:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:18:56 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:56 compute-0 sudo[280763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:18:56 compute-0 sudo[280763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:18:56 compute-0 sudo[280763]: pam_unix(sudo:session): session closed for user root
Oct 08 10:18:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:57.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:18:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:18:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.3 MiB/s wr, 66 op/s
Oct 08 10:18:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:18:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:18:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:18:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:18:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:18:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:18:57 compute-0 nova_compute[262220]: 2025-10-08 10:18:57.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:57.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:58 compute-0 nova_compute[262220]: 2025-10-08 10:18:58.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:18:58 compute-0 ceph-mon[73572]: pgmap v1031: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.3 MiB/s wr, 66 op/s
Oct 08 10:18:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:18:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:18:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:18:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:18:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:18:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:18:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 423 KiB/s rd, 2.3 MiB/s wr, 74 op/s
Oct 08 10:18:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:18:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:18:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:59.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:00 compute-0 ceph-mon[73572]: pgmap v1032: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 423 KiB/s rd, 2.3 MiB/s wr, 74 op/s
Oct 08 10:19:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:01.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 08 10:19:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:01.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:02 compute-0 ceph-mon[73572]: pgmap v1033: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 08 10:19:02 compute-0 nova_compute[262220]: 2025-10-08 10:19:02.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct 08 10:19:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:19:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:19:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:19:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:03 compute-0 nova_compute[262220]: 2025-10-08 10:19:03.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 08 10:19:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:19:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:19:03 compute-0 ceph-mon[73572]: pgmap v1034: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 08 10:19:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:03.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 08 10:19:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:19:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:19:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:06 compute-0 ceph-mon[73572]: pgmap v1035: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 08 10:19:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:07.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:19:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:07.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:19:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:07.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:19:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 22 KiB/s wr, 7 op/s
Oct 08 10:19:07 compute-0 nova_compute[262220]: 2025-10-08 10:19:07.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:07 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2188657694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:07.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:08 compute-0 nova_compute[262220]: 2025-10-08 10:19:08.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:08 compute-0 ceph-mon[73572]: pgmap v1036: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 22 KiB/s wr, 7 op/s
Oct 08 10:19:08 compute-0 podman[280800]: 2025-10-08 10:19:08.931804572 +0000 UTC m=+0.084805439 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:19:08 compute-0 sudo[280821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:19:08 compute-0 sudo[280821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:19:08 compute-0 sudo[280821]: pam_unix(sudo:session): session closed for user root
Oct 08 10:19:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 23 KiB/s wr, 35 op/s
Oct 08 10:19:09 compute-0 ceph-mon[73572]: pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 23 KiB/s wr, 35 op/s
Oct 08 10:19:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 08 10:19:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:11.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:12 compute-0 ceph-mon[73572]: pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 08 10:19:12 compute-0 nova_compute[262220]: 2025-10-08 10:19:12.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:13 compute-0 nova_compute[262220]: 2025-10-08 10:19:13.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 08 10:19:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:13.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:14 compute-0 ceph-mon[73572]: pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 08 10:19:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 08 10:19:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:19:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:19:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:15.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:16 compute-0 ceph-mon[73572]: pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 08 10:19:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:17.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:17.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:19:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:19:17 compute-0 nova_compute[262220]: 2025-10-08 10:19:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:17 compute-0 ceph-mon[73572]: pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:19:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:19:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:19:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:19:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:19:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:17.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:19:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:19:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:19:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:19:18 compute-0 nova_compute[262220]: 2025-10-08 10:19:18.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:19:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:19.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 08 10:19:19 compute-0 ceph-mon[73572]: pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 08 10:19:19 compute-0 podman[280857]: 2025-10-08 10:19:19.927179572 +0000 UTC m=+0.084561192 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:19:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:19.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1229406039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:19:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1229406039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:19:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:21.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:19:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:21.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:22 compute-0 ceph-mon[73572]: pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:19:22 compute-0 nova_compute[262220]: 2025-10-08 10:19:22.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:23.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:23 compute-0 nova_compute[262220]: 2025-10-08 10:19:23.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:19:23 compute-0 ceph-mon[73572]: pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:19:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:23.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:25.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:19:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:25] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:19:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:25] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 08 10:19:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:25.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:26 compute-0 ceph-mon[73572]: pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:19:26 compute-0 nova_compute[262220]: 2025-10-08 10:19:26.852 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:26 compute-0 nova_compute[262220]: 2025-10-08 10:19:26.852 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:26 compute-0 podman[280891]: 2025-10-08 10:19:26.908970131 +0000 UTC m=+0.058511011 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 08 10:19:26 compute-0 nova_compute[262220]: 2025-10-08 10:19:26.910 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 08 10:19:26 compute-0 podman[280890]: 2025-10-08 10:19:26.939966001 +0000 UTC m=+0.089538602 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.034 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.035 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:27.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.041 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.042 2 INFO nova.compute.claims [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Claim successful on node compute-0.ctlplane.example.com
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.172 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:19:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:27.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:19:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:27 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:19:27 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3324795878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.645 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.651 2 DEBUG nova.compute.provider_tree [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.670 2 DEBUG nova.scheduler.client.report [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.706 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.707 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.752 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.753 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.770 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.789 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.881 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.883 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.884 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Creating image(s)
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.924 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.960 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:19:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:27.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.993 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:19:27 compute-0 nova_compute[262220]: 2025-10-08 10:19:27.996 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.053 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.054 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "3cde70359534d4758cf71011630bd1fb14a90c92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.055 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.056 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.086 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.089 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.350 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.430 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] resizing rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 08 10:19:28 compute-0 ceph-mon[73572]: pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:19:28 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3324795878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.535 2 DEBUG nova.policy [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.541 2 DEBUG nova.objects.instance [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'migration_context' on Instance uuid 7d19d2c6-6de1-4096-99e4-24b4265b9c09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.646 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.646 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Ensure instance console log exists: /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.647 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.647 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:28 compute-0 nova_compute[262220]: 2025-10-08 10:19:28.647 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:19:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:29.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:29 compute-0 sudo[281121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:19:29 compute-0 sudo[281121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:19:29 compute-0 sudo[281121]: pam_unix(sudo:session): session closed for user root
Oct 08 10:19:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:29.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:30 compute-0 ceph-mon[73572]: pgmap v1047: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:30 compute-0 nova_compute[262220]: 2025-10-08 10:19:30.724 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Successfully created port: 29abf06b-1e1a-46cb-9cc1-7fa777795883 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 08 10:19:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:31.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:31.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:32 compute-0 nova_compute[262220]: 2025-10-08 10:19:32.511 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Successfully updated port: 29abf06b-1e1a-46cb-9cc1-7fa777795883 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 08 10:19:32 compute-0 nova_compute[262220]: 2025-10-08 10:19:32.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:32 compute-0 ceph-mon[73572]: pgmap v1048: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:32 compute-0 nova_compute[262220]: 2025-10-08 10:19:32.531 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:19:32 compute-0 nova_compute[262220]: 2025-10-08 10:19:32.531 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:19:32 compute-0 nova_compute[262220]: 2025-10-08 10:19:32.531 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 08 10:19:32 compute-0 nova_compute[262220]: 2025-10-08 10:19:32.667 2 DEBUG nova.compute.manager [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:19:32 compute-0 nova_compute[262220]: 2025-10-08 10:19:32.667 2 DEBUG nova.compute.manager [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:19:32 compute-0 nova_compute[262220]: 2025-10-08 10:19:32.668 2 DEBUG oslo_concurrency.lockutils [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:19:32 compute-0 nova_compute[262220]: 2025-10-08 10:19:32.781 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 08 10:19:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:19:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:19:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:33.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:33 compute-0 nova_compute[262220]: 2025-10-08 10:19:33.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:19:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.485 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.507 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.507 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance network_info: |[{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.508 2 DEBUG oslo_concurrency.lockutils [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.508 2 DEBUG nova.network.neutron [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.511 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start _get_guest_xml network_info=[{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': 'e5994bac-385d-4cfe-962e-386aa0559983'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.516 2 WARNING nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.521 2 DEBUG nova.virt.libvirt.host [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.522 2 DEBUG nova.virt.libvirt.host [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.525 2 DEBUG nova.virt.libvirt.host [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.525 2 DEBUG nova.virt.libvirt.host [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.526 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.526 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T10:08:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='461f98d6-ae65-4f86-8ae2-cc3cfaea2a46',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.526 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.528 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.528 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.528 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.528 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.531 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:19:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 08 10:19:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494934549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:19:34 compute-0 nova_compute[262220]: 2025-10-08 10:19:34.976 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.006 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.012 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:19:35 compute-0 ceph-mon[73572]: pgmap v1049: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:35.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 08 10:19:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4057436752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.584 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.586 2 DEBUG nova.virt.libvirt.vif [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:19:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1442491120',display_name='tempest-TestNetworkBasicOps-server-1442491120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1442491120',id=11,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA5zqA1Qj/FXMxdyzpBTW0ZXp5DxknDQcIVK3ARN25T6VayPziIvkKCLWAtPemraMv4byPsH7lpRR4PeiITQ6eibmU22T/5fhhxWj1Ai2d949LVQyVHFvTo1rGRRAeVdbw==',key_name='tempest-TestNetworkBasicOps-1126023314',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-zjf5kwx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:19:27Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=7d19d2c6-6de1-4096-99e4-24b4265b9c09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.587 2 DEBUG nova.network.os_vif_util [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.589 2 DEBUG nova.network.os_vif_util [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.590 2 DEBUG nova.objects.instance [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d19d2c6-6de1-4096-99e4-24b4265b9c09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.609 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] End _get_guest_xml xml=<domain type="kvm">
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <uuid>7d19d2c6-6de1-4096-99e4-24b4265b9c09</uuid>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <name>instance-0000000b</name>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <memory>131072</memory>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <vcpu>1</vcpu>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <metadata>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <nova:name>tempest-TestNetworkBasicOps-server-1442491120</nova:name>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <nova:creationTime>2025-10-08 10:19:34</nova:creationTime>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <nova:flavor name="m1.nano">
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <nova:memory>128</nova:memory>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <nova:disk>1</nova:disk>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <nova:swap>0</nova:swap>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <nova:vcpus>1</nova:vcpus>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       </nova:flavor>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <nova:owner>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       </nova:owner>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <nova:ports>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <nova:port uuid="29abf06b-1e1a-46cb-9cc1-7fa777795883">
Oct 08 10:19:35 compute-0 nova_compute[262220]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         </nova:port>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       </nova:ports>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     </nova:instance>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   </metadata>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <sysinfo type="smbios">
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <system>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <entry name="manufacturer">RDO</entry>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <entry name="product">OpenStack Compute</entry>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <entry name="serial">7d19d2c6-6de1-4096-99e4-24b4265b9c09</entry>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <entry name="uuid">7d19d2c6-6de1-4096-99e4-24b4265b9c09</entry>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <entry name="family">Virtual Machine</entry>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     </system>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   </sysinfo>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <os>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <boot dev="hd"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <smbios mode="sysinfo"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   </os>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <features>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <acpi/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <apic/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <vmcoreinfo/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   </features>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <clock offset="utc">
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <timer name="hpet" present="no"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   </clock>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <cpu mode="host-model" match="exact">
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <disk type="network" device="disk">
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <driver type="raw" cache="none"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <source protocol="rbd" name="vms/7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk">
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <host name="192.168.122.100" port="6789"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <host name="192.168.122.102" port="6789"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <host name="192.168.122.101" port="6789"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       </source>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <auth username="openstack">
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <target dev="vda" bus="virtio"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <disk type="network" device="cdrom">
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <driver type="raw" cache="none"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <source protocol="rbd" name="vms/7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config">
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <host name="192.168.122.100" port="6789"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <host name="192.168.122.102" port="6789"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <host name="192.168.122.101" port="6789"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       </source>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <auth username="openstack">
Oct 08 10:19:35 compute-0 nova_compute[262220]:         <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <target dev="sda" bus="sata"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <interface type="ethernet">
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <mac address="fa:16:3e:00:0d:2d"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <model type="virtio"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <mtu size="1442"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <target dev="tap29abf06b-1e"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <serial type="pty">
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <log file="/var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/console.log" append="off"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     </serial>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <video>
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <model type="virtio"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     </video>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <input type="tablet" bus="usb"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <rng model="virtio">
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <backend model="random">/dev/urandom</backend>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <controller type="usb" index="0"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     <memballoon model="virtio">
Oct 08 10:19:35 compute-0 nova_compute[262220]:       <stats period="10"/>
Oct 08 10:19:35 compute-0 nova_compute[262220]:     </memballoon>
Oct 08 10:19:35 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:19:35 compute-0 nova_compute[262220]: </domain>
Oct 08 10:19:35 compute-0 nova_compute[262220]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.611 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Preparing to wait for external event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.611 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.611 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.612 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.613 2 DEBUG nova.virt.libvirt.vif [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:19:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1442491120',display_name='tempest-TestNetworkBasicOps-server-1442491120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1442491120',id=11,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA5zqA1Qj/FXMxdyzpBTW0ZXp5DxknDQcIVK3ARN25T6VayPziIvkKCLWAtPemraMv4byPsH7lpRR4PeiITQ6eibmU22T/5fhhxWj1Ai2d949LVQyVHFvTo1rGRRAeVdbw==',key_name='tempest-TestNetworkBasicOps-1126023314',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-zjf5kwx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:19:27Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=7d19d2c6-6de1-4096-99e4-24b4265b9c09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.613 2 DEBUG nova.network.os_vif_util [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.614 2 DEBUG nova.network.os_vif_util [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.614 2 DEBUG os_vif [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29abf06b-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29abf06b-1e, col_values=(('external_ids', {'iface-id': '29abf06b-1e1a-46cb-9cc1-7fa777795883', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:0d:2d', 'vm-uuid': '7d19d2c6-6de1-4096-99e4-24b4265b9c09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:35 compute-0 NetworkManager[44872]: <info>  [1759918775.6241] manager: (tap29abf06b-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.632 2 INFO os_vif [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e')
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.696 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.697 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.697 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:00:0d:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.698 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Using config drive
Oct 08 10:19:35 compute-0 nova_compute[262220]: 2025-10-08 10:19:35.731 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:19:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:35] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 08 10:19:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:35] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 08 10:19:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/494934549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:19:36 compute-0 ceph-mon[73572]: pgmap v1050: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4057436752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.059 2 DEBUG nova.network.neutron [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.060 2 DEBUG nova.network.neutron [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.076 2 DEBUG oslo_concurrency.lockutils [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.436 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Creating config drive at /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.441 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0qvqv28s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.573 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0qvqv28s" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.608 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.612 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.788 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.789 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Deleting local config drive /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config because it was imported into RBD.
Oct 08 10:19:36 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 08 10:19:36 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 08 10:19:36 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 10:19:36 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 10:19:36 compute-0 kernel: tap29abf06b-1e: entered promiscuous mode
Oct 08 10:19:36 compute-0 NetworkManager[44872]: <info>  [1759918776.8961] manager: (tap29abf06b-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:36 compute-0 ovn_controller[153187]: 2025-10-08T10:19:36Z|00058|binding|INFO|Claiming lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 for this chassis.
Oct 08 10:19:36 compute-0 ovn_controller[153187]: 2025-10-08T10:19:36Z|00059|binding|INFO|29abf06b-1e1a-46cb-9cc1-7fa777795883: Claiming fa:16:3e:00:0d:2d 10.100.0.8
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.919 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:0d:2d 10.100.0.8'], port_security=['fa:16:3e:00:0d:2d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7d19d2c6-6de1-4096-99e4-24b4265b9c09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19e068da-96ae-4c4d-8c61-2ea91c3392b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ff1baa8-ffa0-48d3-9c93-32e63e4450d8, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=29abf06b-1e1a-46cb-9cc1-7fa777795883) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.920 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 29abf06b-1e1a-46cb-9cc1-7fa777795883 in datapath c18c7476-aaa8-4977-81b5-fb17e88446e2 bound to our chassis
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.922 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c18c7476-aaa8-4977-81b5-fb17e88446e2
Oct 08 10:19:36 compute-0 systemd-udevd[281306]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.935 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb0bca7-9f2c-4751-be72-c3b29ed41703]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.936 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc18c7476-a1 in ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.937 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc18c7476-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.937 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[ef575866-8c57-44e2-b1fd-4fa3305662fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.938 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7d8536-7a7f-482f-884d-a7ed5b2e95d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:36 compute-0 systemd-machined[216030]: New machine qemu-3-instance-0000000b.
Oct 08 10:19:36 compute-0 NetworkManager[44872]: <info>  [1759918776.9451] device (tap29abf06b-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 10:19:36 compute-0 NetworkManager[44872]: <info>  [1759918776.9460] device (tap29abf06b-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.957 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[6471832a-e78d-4706-a998-9bb9df0c1f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:36 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-0000000b.
Oct 08 10:19:36 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.978 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[93f9d3aa-dd3a-41ee-a58b-a6d69e28dd8a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:36 compute-0 ovn_controller[153187]: 2025-10-08T10:19:36Z|00060|binding|INFO|Setting lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 ovn-installed in OVS
Oct 08 10:19:36 compute-0 ovn_controller[153187]: 2025-10-08T10:19:36Z|00061|binding|INFO|Setting lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 up in Southbound
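ovn-controller has now claimed the logical port, set ovn-installed on the OVS interface and marked the port up in the Southbound DB. A hedged sketch of how that binding could be cross-checked; it assumes ovn-sbctl and ovs-vsctl are reachable from this host (in this podified deployment they would typically be run inside the ovn_controller and openvswitch containers):

# Port_Binding row should now list this chassis (logical_port value copied from the log lines above)
ovn-sbctl find Port_Binding logical_port=29abf06b-1e1a-46cb-9cc1-7fa777795883

# The tap interface should carry the matching iface-id that ovn-controller keys the binding on
ovs-vsctl --columns=external_ids find Interface name=tap29abf06b-1e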
Oct 08 10:19:36 compute-0 nova_compute[262220]: 2025-10-08 10:19:36.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.012 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[89456e48-3177-48f1-b4b9-85599e84c561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 NetworkManager[44872]: <info>  [1759918777.0179] manager: (tapc18c7476-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.016 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fa53b7c8-f9f9-4413-a7e0-969a3c48d752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 systemd-udevd[281311]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:19:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:37.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.051 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[52c4c23d-bfe6-4965-ad17-a7256ed516c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.053 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[de94b646-07e2-4401-ae5c-ce25de9c93a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 NetworkManager[44872]: <info>  [1759918777.0792] device (tapc18c7476-a0): carrier: link connected
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.085 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[a997c89a-b35e-4542-bdcb-426e9be5690d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.103 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6cebdbe6-80d1-4d63-9f95-76b48119cf1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc18c7476-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:d8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473571, 'reachable_time': 17333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281340, 'error': None, 'target': 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.123 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e761b673-2425-466c-8246-666cfa8876e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:d8a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473571, 'tstamp': 473571}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281341, 'error': None, 'target': 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.141 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e6537f43-cc12-454f-85a0-5c84df63d9fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc18c7476-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:d8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473571, 'reachable_time': 17333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281342, 'error': None, 'target': 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.173 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[7aeadf71-29c4-4127-b7ea-1e432f128361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:37.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.239 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[71f16e09-ff6d-4832-855e-1eeb2c2967d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.241 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc18c7476-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.241 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.242 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc18c7476-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:19:37 compute-0 NetworkManager[44872]: <info>  [1759918777.2445] manager: (tapc18c7476-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 08 10:19:37 compute-0 kernel: tapc18c7476-a0: entered promiscuous mode
Oct 08 10:19:37 compute-0 nova_compute[262220]: 2025-10-08 10:19:37.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:37 compute-0 nova_compute[262220]: 2025-10-08 10:19:37.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.248 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc18c7476-a0, col_values=(('external_ids', {'iface-id': '10afe0a1-7000-43ca-a48a-2022b8edbb06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:19:37 compute-0 nova_compute[262220]: 2025-10-08 10:19:37.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:37 compute-0 ovn_controller[153187]: 2025-10-08T10:19:37Z|00062|binding|INFO|Releasing lport 10afe0a1-7000-43ca-a48a-2022b8edbb06 from this chassis (sb_readonly=0)
Oct 08 10:19:37 compute-0 nova_compute[262220]: 2025-10-08 10:19:37.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.264 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c18c7476-aaa8-4977-81b5-fb17e88446e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c18c7476-aaa8-4977-81b5-fb17e88446e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.265 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e395c04b-8b74-4eda-a298-b90d06d17a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.266 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: global
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     log         /dev/log local0 debug
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     log-tag     haproxy-metadata-proxy-c18c7476-aaa8-4977-81b5-fb17e88446e2
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     user        root
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     group       root
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     maxconn     1024
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     pidfile     /var/lib/neutron/external/pids/c18c7476-aaa8-4977-81b5-fb17e88446e2.pid.haproxy
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     daemon
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: defaults
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     log global
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     mode http
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     option httplog
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     option dontlognull
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     option http-server-close
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     option forwardfor
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     retries                 3
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     timeout http-request    30s
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     timeout connect         30s
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     timeout client          32s
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     timeout server          32s
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     timeout http-keep-alive 30s
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: listen listener
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     bind 169.254.169.254:80
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:     http-request add-header X-OVN-Network-ID c18c7476-aaa8-4977-81b5-fb17e88446e2
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 08 10:19:37 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.266 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'env', 'PROCESS_TAG=haproxy-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c18c7476-aaa8-4977-81b5-fb17e88446e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
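The agent renders the haproxy configuration dumped above into /var/lib/neutron/ovn-metadata-proxy/c18c7476-aaa8-4977-81b5-fb17e88446e2.conf and launches haproxy inside the ovnmeta namespace via rootwrap. A minimal way to inspect the result, assuming root access on the compute host; the namespace name and file path are taken verbatim from the log:

# The network namespace created for the metadata proxy
ip netns list | grep ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2

# Interfaces and the 169.254.169.254 bind address inside it
ip netns exec ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 ip addr show

# The rendered haproxy configuration written by the agent
cat /var/lib/neutron/ovn-metadata-proxy/c18c7476-aaa8-4977-81b5-fb17e88446e2.conf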
Oct 08 10:19:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:37 compute-0 podman[281417]: 2025-10-08 10:19:37.756812435 +0000 UTC m=+0.101084745 container create 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 08 10:19:37 compute-0 podman[281417]: 2025-10-08 10:19:37.679294252 +0000 UTC m=+0.023566562 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 08 10:19:37 compute-0 systemd[1]: Started libpod-conmon-7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce.scope.
Oct 08 10:19:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669b7976e7c613a7666c66b557e5e70955b0380381cfc69b3da6fa8e03ce9e5e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:37 compute-0 podman[281417]: 2025-10-08 10:19:37.94184591 +0000 UTC m=+0.286118250 container init 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:19:37 compute-0 podman[281417]: 2025-10-08 10:19:37.948897088 +0000 UTC m=+0.293169408 container start 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 08 10:19:37 compute-0 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [NOTICE]   (281438) : New worker (281441) forked
Oct 08 10:19:37 compute-0 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [NOTICE]   (281438) : Loading success.
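haproxy is now running as a podman-managed container bound to that namespace. A hedged check of the container and of the proxy itself from the host side; the 169.254.169.254:80 bind comes from the config above, and expecting some HTTP status (rather than a specific code) is an assumption:

# The dedicated haproxy container started by the metadata agent
podman ps --filter name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2

# Probe the metadata endpoint through the namespace; any HTTP status code shows the proxy is answering
ip netns exec ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 curl -s -o /dev/null -w '%{http_code}\n' http://169.254.169.254/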
Oct 08 10:19:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:38.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.020 2 DEBUG nova.compute.manager [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.022 2 DEBUG oslo_concurrency.lockutils [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.022 2 DEBUG oslo_concurrency.lockutils [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.022 2 DEBUG oslo_concurrency.lockutils [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.023 2 DEBUG nova.compute.manager [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Processing event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.133 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.134 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918778.1329854, 7d19d2c6-6de1-4096-99e4-24b4265b9c09 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.135 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] VM Started (Lifecycle Event)
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.138 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.147 2 INFO nova.virt.libvirt.driver [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance spawned successfully.
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.149 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.155 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.157 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.171 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.172 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.172 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.172 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.173 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.173 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.183 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.183 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918778.1331172, 7d19d2c6-6de1-4096-99e4-24b4265b9c09 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.183 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] VM Paused (Lifecycle Event)
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.209 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.213 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918778.1372397, 7d19d2c6-6de1-4096-99e4-24b4265b9c09 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.213 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] VM Resumed (Lifecycle Event)
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.235 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.239 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.242 2 INFO nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Took 10.36 seconds to spawn the instance on the hypervisor.
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.243 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.259 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.316 2 INFO nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Took 11.31 seconds to build instance.
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:38 compute-0 nova_compute[262220]: 2025-10-08 10:19:38.344 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
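The spawn has finished (10.36 s on the hypervisor, 11.31 s end to end) and nova has released the build lock. Two ways the new guest could be confirmed, sketched under the assumption that virsh and the openstack client are available against this environment; the libvirt domain name instance-0000000b and the instance UUID are taken from the log:

# Libvirt view of the freshly started domain (in this deployment, typically run inside the nova_libvirt container)
virsh -c qemu:///system domstate instance-0000000b

# API view of the instance once the build completes
openstack server show 7d19d2c6-6de1-4096-99e4-24b4265b9c09 -c status -c 'OS-EXT-SRV-ATTR:host'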
Oct 08 10:19:38 compute-0 ceph-mon[73572]: pgmap v1051: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:19:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:39.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 08 10:19:39 compute-0 podman[281451]: 2025-10-08 10:19:39.908865456 +0000 UTC m=+0.065295109 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:19:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:40.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:40 compute-0 nova_compute[262220]: 2025-10-08 10:19:40.120 2 DEBUG nova.compute.manager [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:19:40 compute-0 nova_compute[262220]: 2025-10-08 10:19:40.120 2 DEBUG oslo_concurrency.lockutils [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:40 compute-0 nova_compute[262220]: 2025-10-08 10:19:40.120 2 DEBUG oslo_concurrency.lockutils [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:40 compute-0 nova_compute[262220]: 2025-10-08 10:19:40.121 2 DEBUG oslo_concurrency.lockutils [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:19:40 compute-0 nova_compute[262220]: 2025-10-08 10:19:40.121 2 DEBUG nova.compute.manager [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:19:40 compute-0 nova_compute[262220]: 2025-10-08 10:19:40.121 2 WARNING nova.compute.manager [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.
Oct 08 10:19:40 compute-0 ceph-mon[73572]: pgmap v1052: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 08 10:19:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=infra.usagestats t=2025-10-08T10:19:40.478312924Z level=info msg="Usage stats are ready to report"
Oct 08 10:19:40 compute-0 nova_compute[262220]: 2025-10-08 10:19:40.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:41.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 08 10:19:41 compute-0 nova_compute[262220]: 2025-10-08 10:19:41.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:41 compute-0 nova_compute[262220]: 2025-10-08 10:19:41.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 08 10:19:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:42.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:42 compute-0 ceph-mon[73572]: pgmap v1053: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 08 10:19:42 compute-0 NetworkManager[44872]: <info>  [1759918782.8475] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 08 10:19:42 compute-0 NetworkManager[44872]: <info>  [1759918782.8488] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 08 10:19:42 compute-0 nova_compute[262220]: 2025-10-08 10:19:42.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:42 compute-0 ovn_controller[153187]: 2025-10-08T10:19:42Z|00063|binding|INFO|Releasing lport 10afe0a1-7000-43ca-a48a-2022b8edbb06 from this chassis (sb_readonly=0)
Oct 08 10:19:42 compute-0 nova_compute[262220]: 2025-10-08 10:19:42.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:42 compute-0 ovn_controller[153187]: 2025-10-08T10:19:42Z|00064|binding|INFO|Releasing lport 10afe0a1-7000-43ca-a48a-2022b8edbb06 from this chassis (sb_readonly=0)
Oct 08 10:19:42 compute-0 nova_compute[262220]: 2025-10-08 10:19:42.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:43 compute-0 nova_compute[262220]: 2025-10-08 10:19:43.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 08 10:19:43 compute-0 nova_compute[262220]: 2025-10-08 10:19:43.619 2 DEBUG nova.compute.manager [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:19:43 compute-0 nova_compute[262220]: 2025-10-08 10:19:43.620 2 DEBUG nova.compute.manager [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:19:43 compute-0 nova_compute[262220]: 2025-10-08 10:19:43.620 2 DEBUG oslo_concurrency.lockutils [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:19:43 compute-0 nova_compute[262220]: 2025-10-08 10:19:43.620 2 DEBUG oslo_concurrency.lockutils [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:19:43 compute-0 nova_compute[262220]: 2025-10-08 10:19:43.620 2 DEBUG nova.network.neutron [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:19:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:44 compute-0 ceph-mon[73572]: pgmap v1054: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 08 10:19:44 compute-0 nova_compute[262220]: 2025-10-08 10:19:44.648 2 DEBUG nova.network.neutron [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:19:44 compute-0 nova_compute[262220]: 2025-10-08 10:19:44.649 2 DEBUG nova.network.neutron [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:19:44 compute-0 nova_compute[262220]: 2025-10-08 10:19:44.666 2 DEBUG oslo_concurrency.lockutils [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:19:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:45.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 08 10:19:45 compute-0 nova_compute[262220]: 2025-10-08 10:19:45.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:45] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 08 10:19:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:45] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 08 10:19:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:46.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:46 compute-0 ceph-mon[73572]: pgmap v1055: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 08 10:19:46 compute-0 nova_compute[262220]: 2025-10-08 10:19:46.905 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:47.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:47.187Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:19:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:19:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:19:47
Oct 08 10:19:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:19:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:19:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control']
Oct 08 10:19:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:19:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:19:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:19:47 compute-0 nova_compute[262220]: 2025-10-08 10:19:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:47 compute-0 nova_compute[262220]: 2025-10-08 10:19:47.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:19:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:19:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:48.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:19:48 compute-0 nova_compute[262220]: 2025-10-08 10:19:48.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:48 compute-0 ceph-mon[73572]: pgmap v1056: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 08 10:19:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:19:48 compute-0 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:34350] [POST] [200] [0.002s] [4.0B] [4da8c34a-8050-4a36-a28d-df46569e208a] /api/prometheus_receiver
Oct 08 10:19:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:49.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:49 compute-0 sudo[281482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:19:49 compute-0 sudo[281482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:19:49 compute-0 sudo[281482]: pam_unix(sudo:session): session closed for user root
Oct 08 10:19:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 08 10:19:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1880697100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:49 compute-0 nova_compute[262220]: 2025-10-08 10:19:49.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:49 compute-0 nova_compute[262220]: 2025-10-08 10:19:49.885 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:49 compute-0 nova_compute[262220]: 2025-10-08 10:19:49.885 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:19:49 compute-0 nova_compute[262220]: 2025-10-08 10:19:49.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:19:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:50.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:50 compute-0 nova_compute[262220]: 2025-10-08 10:19:50.507 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:19:50 compute-0 nova_compute[262220]: 2025-10-08 10:19:50.507 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:19:50 compute-0 nova_compute[262220]: 2025-10-08 10:19:50.507 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 08 10:19:50 compute-0 nova_compute[262220]: 2025-10-08 10:19:50.507 2 DEBUG nova.objects.instance [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d19d2c6-6de1-4096-99e4-24b4265b9c09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:19:50 compute-0 ceph-mon[73572]: pgmap v1057: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 08 10:19:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1346075137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:50 compute-0 nova_compute[262220]: 2025-10-08 10:19:50.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:50 compute-0 podman[281509]: 2025-10-08 10:19:50.920465569 +0000 UTC m=+0.084836901 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 08 10:19:50 compute-0 nova_compute[262220]: 2025-10-08 10:19:50.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:50 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:50.992 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:19:50 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:50.993 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:19:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:51.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 08 10:19:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2678946387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:51 compute-0 ovn_controller[153187]: 2025-10-08T10:19:51Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:0d:2d 10.100.0.8
Oct 08 10:19:51 compute-0 ovn_controller[153187]: 2025-10-08T10:19:51Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:0d:2d 10.100.0.8
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.694 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.715 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.716 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.716 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.716 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.818 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.819 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.819 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.820 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:19:51 compute-0 nova_compute[262220]: 2025-10-08 10:19:51.820 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:19:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:52.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:19:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581363615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.260 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.376 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.377 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 08 10:19:52 compute-0 ceph-mon[73572]: pgmap v1058: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 08 10:19:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2581363615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.624 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.626 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4333MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.626 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.627 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.744 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance 7d19d2c6-6de1-4096-99e4-24b4265b9c09 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.745 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.745 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.793 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.852 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.853 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.867 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.890 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 08 10:19:52 compute-0 nova_compute[262220]: 2025-10-08 10:19:52.919 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:19:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:53.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:19:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726016370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.352 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.357 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.373 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.395 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.395 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.396 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.396 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.412 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.413 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.594 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.595 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:19:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/656967693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2726016370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:53 compute-0 ceph-mon[73572]: pgmap v1059: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 08 10:19:53 compute-0 nova_compute[262220]: 2025-10-08 10:19:53.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:19:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:54.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3392990420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:19:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2813248791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:19:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/4290630077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:19:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:19:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:55.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:19:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Oct 08 10:19:55 compute-0 ceph-mon[73572]: pgmap v1060: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Oct 08 10:19:55 compute-0 nova_compute[262220]: 2025-10-08 10:19:55.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:55] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 08 10:19:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:55] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 08 10:19:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:56.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:56 compute-0 sudo[281588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:19:56 compute-0 sudo[281588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:19:56 compute-0 sudo[281588]: pam_unix(sudo:session): session closed for user root
Oct 08 10:19:56 compute-0 sudo[281613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:19:56 compute-0 sudo[281613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:19:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:19:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:57.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:19:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:57.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:19:57 compute-0 sudo[281613]: pam_unix(sudo:session): session closed for user root
Oct 08 10:19:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:19:57 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:19:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:19:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:19:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 08 10:19:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:19:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:19:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:19:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:19:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:19:57 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:19:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:19:57 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:19:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:19:57 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:19:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:19:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:19:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:19:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:19:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:19:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:19:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:19:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:57.418 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:19:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:57.418 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:19:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:57.419 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:19:57 compute-0 sudo[281669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:19:57 compute-0 sudo[281669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:19:57 compute-0 sudo[281669]: pam_unix(sudo:session): session closed for user root
Oct 08 10:19:57 compute-0 sudo[281707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:19:57 compute-0 sudo[281707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:19:57 compute-0 podman[281693]: 2025-10-08 10:19:57.511828921 +0000 UTC m=+0.064994780 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 08 10:19:57 compute-0 podman[281694]: 2025-10-08 10:19:57.526572867 +0000 UTC m=+0.079416265 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:19:57 compute-0 podman[281794]: 2025-10-08 10:19:57.887930285 +0000 UTC m=+0.042204334 container create 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:19:57 compute-0 systemd[1]: Started libpod-conmon-567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025.scope.
Oct 08 10:19:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:19:57 compute-0 podman[281794]: 2025-10-08 10:19:57.865675327 +0000 UTC m=+0.019949396 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:19:57 compute-0 podman[281794]: 2025-10-08 10:19:57.974816211 +0000 UTC m=+0.129090300 container init 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:19:57 compute-0 podman[281794]: 2025-10-08 10:19:57.98219981 +0000 UTC m=+0.136473859 container start 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 10:19:57 compute-0 podman[281794]: 2025-10-08 10:19:57.986148957 +0000 UTC m=+0.140423036 container attach 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 08 10:19:57 compute-0 vibrant_tesla[281810]: 167 167
Oct 08 10:19:57 compute-0 systemd[1]: libpod-567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025.scope: Deactivated successfully.
Oct 08 10:19:57 compute-0 podman[281794]: 2025-10-08 10:19:57.987952906 +0000 UTC m=+0.142226955 container died 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 10:19:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-50b6c399de0b013e2e4308e6876fd8939c4d39863206f096db4b003ee8c0d619-merged.mount: Deactivated successfully.
Oct 08 10:19:58 compute-0 podman[281794]: 2025-10-08 10:19:58.032940759 +0000 UTC m=+0.187214808 container remove 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 08 10:19:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:58.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:58 compute-0 systemd[1]: libpod-conmon-567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025.scope: Deactivated successfully.
Oct 08 10:19:58 compute-0 podman[281833]: 2025-10-08 10:19:58.183749228 +0000 UTC m=+0.039379152 container create e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:19:58 compute-0 systemd[1]: Started libpod-conmon-e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79.scope.
Oct 08 10:19:58 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:58 compute-0 podman[281833]: 2025-10-08 10:19:58.258706268 +0000 UTC m=+0.114336222 container init e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 10:19:58 compute-0 podman[281833]: 2025-10-08 10:19:58.167585805 +0000 UTC m=+0.023215749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:19:58 compute-0 podman[281833]: 2025-10-08 10:19:58.273060382 +0000 UTC m=+0.128690306 container start e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 08 10:19:58 compute-0 podman[281833]: 2025-10-08 10:19:58.278395314 +0000 UTC m=+0.134025258 container attach e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:19:58 compute-0 nova_compute[262220]: 2025-10-08 10:19:58.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:19:58 compute-0 ceph-mon[73572]: pgmap v1061: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 08 10:19:58 compute-0 magical_poincare[281850]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:19:58 compute-0 magical_poincare[281850]: --> All data devices are unavailable
Oct 08 10:19:58 compute-0 systemd[1]: libpod-e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79.scope: Deactivated successfully.
Oct 08 10:19:58 compute-0 podman[281833]: 2025-10-08 10:19:58.592612701 +0000 UTC m=+0.448242645 container died e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 08 10:19:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf-merged.mount: Deactivated successfully.
Oct 08 10:19:58 compute-0 podman[281833]: 2025-10-08 10:19:58.647264915 +0000 UTC m=+0.502894839 container remove e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 08 10:19:58 compute-0 systemd[1]: libpod-conmon-e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79.scope: Deactivated successfully.
Oct 08 10:19:58 compute-0 sudo[281707]: pam_unix(sudo:session): session closed for user root
Oct 08 10:19:58 compute-0 sudo[281880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:19:58 compute-0 sudo[281880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:19:58 compute-0 sudo[281880]: pam_unix(sudo:session): session closed for user root
Oct 08 10:19:58 compute-0 sudo[281905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:19:58 compute-0 sudo[281905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:19:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:58.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:19:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:19:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:19:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:19:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:19:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:19:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:19:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:59.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:19:59 compute-0 podman[281974]: 2025-10-08 10:19:59.231942355 +0000 UTC m=+0.038298788 container create e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 10:19:59 compute-0 systemd[1]: Started libpod-conmon-e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b.scope.
Oct 08 10:19:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:19:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Oct 08 10:19:59 compute-0 podman[281974]: 2025-10-08 10:19:59.303772835 +0000 UTC m=+0.110129288 container init e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:19:59 compute-0 podman[281974]: 2025-10-08 10:19:59.214657566 +0000 UTC m=+0.021014029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:19:59 compute-0 podman[281974]: 2025-10-08 10:19:59.310276144 +0000 UTC m=+0.116632577 container start e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Oct 08 10:19:59 compute-0 podman[281974]: 2025-10-08 10:19:59.313277671 +0000 UTC m=+0.119634114 container attach e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:19:59 compute-0 loving_sinoussi[281990]: 167 167
Oct 08 10:19:59 compute-0 systemd[1]: libpod-e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b.scope: Deactivated successfully.
Oct 08 10:19:59 compute-0 podman[281974]: 2025-10-08 10:19:59.316506595 +0000 UTC m=+0.122863028 container died e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:19:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6b1cc7940ab16c10bce76c2623833643e3a08eedbeb16dc4a3707743970c9f8-merged.mount: Deactivated successfully.
Oct 08 10:19:59 compute-0 podman[281974]: 2025-10-08 10:19:59.353766649 +0000 UTC m=+0.160123082 container remove e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 10:19:59 compute-0 systemd[1]: libpod-conmon-e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b.scope: Deactivated successfully.
Oct 08 10:19:59 compute-0 podman[282014]: 2025-10-08 10:19:59.51236032 +0000 UTC m=+0.040687225 container create bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:19:59 compute-0 systemd[1]: Started libpod-conmon-bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887.scope.
Oct 08 10:19:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:19:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:19:59 compute-0 podman[282014]: 2025-10-08 10:19:59.494761751 +0000 UTC m=+0.023088666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:19:59 compute-0 podman[282014]: 2025-10-08 10:19:59.604725062 +0000 UTC m=+0.133051967 container init bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:19:59 compute-0 podman[282014]: 2025-10-08 10:19:59.611842253 +0000 UTC m=+0.140169138 container start bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:19:59 compute-0 podman[282014]: 2025-10-08 10:19:59.615726077 +0000 UTC m=+0.144052982 container attach bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]: {
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:     "1": [
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:         {
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "devices": [
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "/dev/loop3"
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             ],
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "lv_name": "ceph_lv0",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "lv_size": "21470642176",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "name": "ceph_lv0",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "tags": {
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.cluster_name": "ceph",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.crush_device_class": "",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.encrypted": "0",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.osd_id": "1",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.type": "block",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.vdo": "0",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:                 "ceph.with_tpm": "0"
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             },
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "type": "block",
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:             "vg_name": "ceph_vg0"
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:         }
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]:     ]
Oct 08 10:19:59 compute-0 condescending_mahavira[282030]: }
Oct 08 10:19:59 compute-0 systemd[1]: libpod-bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887.scope: Deactivated successfully.
Oct 08 10:19:59 compute-0 podman[282014]: 2025-10-08 10:19:59.908836053 +0000 UTC m=+0.437162928 container died bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 08 10:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4-merged.mount: Deactivated successfully.
Oct 08 10:19:59 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:19:59.995 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:20:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 08 10:20:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 08 10:20:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1 is in error state
Oct 08 10:20:00 compute-0 podman[282014]: 2025-10-08 10:20:00.015426034 +0000 UTC m=+0.543752939 container remove bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:20:00 compute-0 systemd[1]: libpod-conmon-bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887.scope: Deactivated successfully.
Oct 08 10:20:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:00.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:00 compute-0 sudo[281905]: pam_unix(sudo:session): session closed for user root
Oct 08 10:20:00 compute-0 sudo[282052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:20:00 compute-0 sudo[282052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:20:00 compute-0 sudo[282052]: pam_unix(sudo:session): session closed for user root
Oct 08 10:20:00 compute-0 sudo[282077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:20:00 compute-0 sudo[282077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:20:00 compute-0 ceph-mon[73572]: pgmap v1062: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Oct 08 10:20:00 compute-0 ceph-mon[73572]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 08 10:20:00 compute-0 ceph-mon[73572]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 08 10:20:00 compute-0 ceph-mon[73572]:     daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1 is in error state
Oct 08 10:20:00 compute-0 podman[282144]: 2025-10-08 10:20:00.561171567 +0000 UTC m=+0.035823998 container create 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:20:00 compute-0 systemd[1]: Started libpod-conmon-831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456.scope.
Oct 08 10:20:00 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:20:00 compute-0 podman[282144]: 2025-10-08 10:20:00.633562115 +0000 UTC m=+0.108214576 container init 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 10:20:00 compute-0 podman[282144]: 2025-10-08 10:20:00.640381834 +0000 UTC m=+0.115034265 container start 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:20:00 compute-0 podman[282144]: 2025-10-08 10:20:00.546230814 +0000 UTC m=+0.020883265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:20:00 compute-0 podman[282144]: 2025-10-08 10:20:00.643378971 +0000 UTC m=+0.118031402 container attach 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 08 10:20:00 compute-0 vibrant_beaver[282161]: 167 167
Oct 08 10:20:00 compute-0 systemd[1]: libpod-831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456.scope: Deactivated successfully.
Oct 08 10:20:00 compute-0 podman[282144]: 2025-10-08 10:20:00.646352177 +0000 UTC m=+0.121004628 container died 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:20:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5a74ca4c9711c5213c7647ddaa7db7488ff9f08d8f860324120153e7f0e74e6-merged.mount: Deactivated successfully.
Oct 08 10:20:00 compute-0 nova_compute[262220]: 2025-10-08 10:20:00.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:00 compute-0 podman[282144]: 2025-10-08 10:20:00.677834464 +0000 UTC m=+0.152486895 container remove 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:20:00 compute-0 systemd[1]: libpod-conmon-831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456.scope: Deactivated successfully.
Oct 08 10:20:00 compute-0 podman[282185]: 2025-10-08 10:20:00.838361907 +0000 UTC m=+0.038643999 container create 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 08 10:20:00 compute-0 systemd[1]: Started libpod-conmon-6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad.scope.
Oct 08 10:20:00 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:20:00 compute-0 podman[282185]: 2025-10-08 10:20:00.821114191 +0000 UTC m=+0.021396303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:20:00 compute-0 podman[282185]: 2025-10-08 10:20:00.921270275 +0000 UTC m=+0.121552397 container init 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:20:00 compute-0 podman[282185]: 2025-10-08 10:20:00.931879207 +0000 UTC m=+0.132161299 container start 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 10:20:00 compute-0 podman[282185]: 2025-10-08 10:20:00.935107391 +0000 UTC m=+0.135389483 container attach 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 08 10:20:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:20:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:01.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:20:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 08 10:20:01 compute-0 lvm[282277]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:20:01 compute-0 lvm[282277]: VG ceph_vg0 finished
Oct 08 10:20:01 compute-0 youthful_matsumoto[282202]: {}
Oct 08 10:20:01 compute-0 systemd[1]: libpod-6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad.scope: Deactivated successfully.
Oct 08 10:20:01 compute-0 podman[282185]: 2025-10-08 10:20:01.669234577 +0000 UTC m=+0.869516669 container died 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 10:20:01 compute-0 systemd[1]: libpod-6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad.scope: Consumed 1.130s CPU time.
Oct 08 10:20:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780-merged.mount: Deactivated successfully.
Oct 08 10:20:01 compute-0 podman[282185]: 2025-10-08 10:20:01.709023011 +0000 UTC m=+0.909305093 container remove 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 08 10:20:01 compute-0 systemd[1]: libpod-conmon-6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad.scope: Deactivated successfully.
Oct 08 10:20:01 compute-0 sudo[282077]: pam_unix(sudo:session): session closed for user root
Oct 08 10:20:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:20:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:20:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:20:01 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:20:01 compute-0 sudo[282294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:20:01 compute-0 sudo[282294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:20:01 compute-0 sudo[282294]: pam_unix(sudo:session): session closed for user root
Oct 08 10:20:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:02.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:02 compute-0 ceph-mon[73572]: pgmap v1063: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 08 10:20:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:20:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:20:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:20:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:20:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:03.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 08 10:20:03 compute-0 nova_compute[262220]: 2025-10-08 10:20:03.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:20:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:20:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:04.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:20:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:04 compute-0 ceph-mon[73572]: pgmap v1064: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 08 10:20:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:05.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 08 10:20:05 compute-0 nova_compute[262220]: 2025-10-08 10:20:05.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:20:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:20:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:06 compute-0 ceph-mon[73572]: pgmap v1065: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 08 10:20:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:07.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:07.189Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:20:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:07.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:20:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 75 op/s
Oct 08 10:20:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:08.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:08 compute-0 nova_compute[262220]: 2025-10-08 10:20:08.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:08 compute-0 ceph-mon[73572]: pgmap v1066: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 75 op/s
Oct 08 10:20:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:08.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:09.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:09 compute-0 sudo[282327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:20:09 compute-0 sudo[282327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:20:09 compute-0 sudo[282327]: pam_unix(sudo:session): session closed for user root
Oct 08 10:20:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 188 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 08 10:20:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:10.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:10 compute-0 ceph-mon[73572]: pgmap v1067: 353 pgs: 353 active+clean; 188 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 08 10:20:10 compute-0 nova_compute[262220]: 2025-10-08 10:20:10.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:10 compute-0 podman[282353]: 2025-10-08 10:20:10.950555587 +0000 UTC m=+0.089799200 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 08 10:20:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:20:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:11.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:20:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 188 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 08 10:20:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:12.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:12 compute-0 ceph-mon[73572]: pgmap v1068: 353 pgs: 353 active+clean; 188 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 08 10:20:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:13.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 188 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 08 10:20:13 compute-0 nova_compute[262220]: 2025-10-08 10:20:13.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:14.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:14 compute-0 ceph-mon[73572]: pgmap v1069: 353 pgs: 353 active+clean; 188 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 08 10:20:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:15.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:20:15 compute-0 nova_compute[262220]: 2025-10-08 10:20:15.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:20:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:20:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:16.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:16 compute-0 ceph-mon[73572]: pgmap v1070: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:20:16 compute-0 nova_compute[262220]: 2025-10-08 10:20:16.709 2 INFO nova.compute.manager [None req-903dcb2d-0f0e-48f8-b59f-d833809f74e0 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Get console output
Oct 08 10:20:16 compute-0 nova_compute[262220]: 2025-10-08 10:20:16.715 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 08 10:20:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:17.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:17.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:20:17 compute-0 ceph-mon[73572]: pgmap v1071: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:20:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:20:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:20:17 compute-0 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG nova.compute.manager [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:17 compute-0 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG nova.compute.manager [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:20:17 compute-0 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG oslo_concurrency.lockutils [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:20:17 compute-0 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG oslo_concurrency.lockutils [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:20:17 compute-0 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG nova.network.neutron [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:20:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:20:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:20:18 compute-0 nova_compute[262220]: 2025-10-08 10:20:18.021 2 DEBUG nova.compute.manager [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:18 compute-0 nova_compute[262220]: 2025-10-08 10:20:18.021 2 DEBUG oslo_concurrency.lockutils [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:18 compute-0 nova_compute[262220]: 2025-10-08 10:20:18.022 2 DEBUG oslo_concurrency.lockutils [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:18 compute-0 nova_compute[262220]: 2025-10-08 10:20:18.022 2 DEBUG oslo_concurrency.lockutils [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:18 compute-0 nova_compute[262220]: 2025-10-08 10:20:18.022 2 DEBUG nova.compute.manager [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:20:18 compute-0 nova_compute[262220]: 2025-10-08 10:20:18.022 2 WARNING nova.compute.manager [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.
Oct 08 10:20:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:20:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:20:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:20:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:20:18 compute-0 nova_compute[262220]: 2025-10-08 10:20:18.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:18.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:20:18 compute-0 nova_compute[262220]: 2025-10-08 10:20:18.960 2 INFO nova.compute.manager [None req-daf7a5dd-7ec8-485f-bb94-4a0835f57953 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Get console output
Oct 08 10:20:18 compute-0 nova_compute[262220]: 2025-10-08 10:20:18.965 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 08 10:20:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:19 compute-0 nova_compute[262220]: 2025-10-08 10:20:19.069 2 DEBUG nova.network.neutron [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:20:19 compute-0 nova_compute[262220]: 2025-10-08 10:20:19.070 2 DEBUG nova.network.neutron [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:20:19 compute-0 nova_compute[262220]: 2025-10-08 10:20:19.086 2 DEBUG oslo_concurrency.lockutils [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:20:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:19.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 08 10:20:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:19 compute-0 ceph-mon[73572]: pgmap v1072: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 08 10:20:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:20.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:20 compute-0 nova_compute[262220]: 2025-10-08 10:20:20.110 2 DEBUG nova.compute.manager [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:20 compute-0 nova_compute[262220]: 2025-10-08 10:20:20.111 2 DEBUG oslo_concurrency.lockutils [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:20 compute-0 nova_compute[262220]: 2025-10-08 10:20:20.111 2 DEBUG oslo_concurrency.lockutils [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:20 compute-0 nova_compute[262220]: 2025-10-08 10:20:20.111 2 DEBUG oslo_concurrency.lockutils [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:20 compute-0 nova_compute[262220]: 2025-10-08 10:20:20.112 2 DEBUG nova.compute.manager [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:20:20 compute-0 nova_compute[262220]: 2025-10-08 10:20:20.112 2 WARNING nova.compute.manager [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.
Oct 08 10:20:20 compute-0 nova_compute[262220]: 2025-10-08 10:20:20.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:21.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:21 compute-0 nova_compute[262220]: 2025-10-08 10:20:21.104 2 INFO nova.compute.manager [None req-01e54040-c2ee-4c9a-ba77-327543b8aaf4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Get console output
Oct 08 10:20:21 compute-0 nova_compute[262220]: 2025-10-08 10:20:21.107 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 08 10:20:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 107 KiB/s wr, 19 op/s
Oct 08 10:20:21 compute-0 podman[282384]: 2025-10-08 10:20:21.96524768 +0000 UTC m=+0.122485046 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 10:20:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:22.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:22 compute-0 nova_compute[262220]: 2025-10-08 10:20:22.240 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:22 compute-0 nova_compute[262220]: 2025-10-08 10:20:22.241 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:20:22 compute-0 nova_compute[262220]: 2025-10-08 10:20:22.241 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:20:22 compute-0 nova_compute[262220]: 2025-10-08 10:20:22.242 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:20:22 compute-0 nova_compute[262220]: 2025-10-08 10:20:22.242 2 DEBUG nova.network.neutron [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:20:22 compute-0 ceph-mon[73572]: pgmap v1073: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 107 KiB/s wr, 19 op/s
Oct 08 10:20:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:23.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 107 KiB/s wr, 19 op/s
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.608 2 DEBUG nova.network.neutron [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.609 2 DEBUG nova.network.neutron [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.637 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.639 2 WARNING nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.639 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.639 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.639 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.639 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.640 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:20:23 compute-0 nova_compute[262220]: 2025-10-08 10:20:23.640 2 WARNING nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.
Oct 08 10:20:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:24.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:24 compute-0 ceph-mon[73572]: pgmap v1074: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 107 KiB/s wr, 19 op/s
Oct 08 10:20:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:25.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 121 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 115 KiB/s wr, 48 op/s
Oct 08 10:20:25 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1958511337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:25 compute-0 nova_compute[262220]: 2025-10-08 10:20:25.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:25] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:20:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:25] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:20:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:26 compute-0 ceph-mon[73572]: pgmap v1075: 353 pgs: 353 active+clean; 121 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 115 KiB/s wr, 48 op/s
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.063 2 DEBUG nova.compute.manager [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.064 2 DEBUG nova.compute.manager [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.064 2 DEBUG oslo_concurrency.lockutils [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.064 2 DEBUG oslo_concurrency.lockutils [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.065 2 DEBUG nova.network.neutron [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:20:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:27.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.159 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.160 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.160 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.161 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.161 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.163 2 INFO nova.compute.manager [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Terminating instance
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.164 2 DEBUG nova.compute.manager [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 08 10:20:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:27.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:27 compute-0 kernel: tap29abf06b-1e (unregistering): left promiscuous mode
Oct 08 10:20:27 compute-0 NetworkManager[44872]: <info>  [1759918827.2178] device (tap29abf06b-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 10:20:27 compute-0 ovn_controller[153187]: 2025-10-08T10:20:27Z|00065|binding|INFO|Releasing lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 from this chassis (sb_readonly=0)
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:27 compute-0 ovn_controller[153187]: 2025-10-08T10:20:27Z|00066|binding|INFO|Setting lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 down in Southbound
Oct 08 10:20:27 compute-0 ovn_controller[153187]: 2025-10-08T10:20:27Z|00067|binding|INFO|Removing iface tap29abf06b-1e ovn-installed in OVS
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.233 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:0d:2d 10.100.0.8'], port_security=['fa:16:3e:00:0d:2d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7d19d2c6-6de1-4096-99e4-24b4265b9c09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '19e068da-96ae-4c4d-8c61-2ea91c3392b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ff1baa8-ffa0-48d3-9c93-32e63e4450d8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=29abf06b-1e1a-46cb-9cc1-7fa777795883) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.234 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 29abf06b-1e1a-46cb-9cc1-7fa777795883 in datapath c18c7476-aaa8-4977-81b5-fb17e88446e2 unbound from our chassis
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.235 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c18c7476-aaa8-4977-81b5-fb17e88446e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.236 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8b8c7a-f4a0-4a83-9aba-0ec67089aafe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.237 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 namespace which is not needed anymore
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 08 10:20:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000b.scope: Consumed 14.435s CPU time.
Oct 08 10:20:27 compute-0 systemd-machined[216030]: Machine qemu-3-instance-0000000b terminated.
Oct 08 10:20:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 121 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:27 compute-0 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [NOTICE]   (281438) : haproxy version is 2.8.14-c23fe91
Oct 08 10:20:27 compute-0 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [NOTICE]   (281438) : path to executable is /usr/sbin/haproxy
Oct 08 10:20:27 compute-0 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [WARNING]  (281438) : Exiting Master process...
Oct 08 10:20:27 compute-0 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [ALERT]    (281438) : Current worker (281441) exited with code 143 (Terminated)
Oct 08 10:20:27 compute-0 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [WARNING]  (281438) : All workers exited. Exiting... (0)
Oct 08 10:20:27 compute-0 systemd[1]: libpod-7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce.scope: Deactivated successfully.
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.407 2 INFO nova.virt.libvirt.driver [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance destroyed successfully.
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.407 2 DEBUG nova.objects.instance [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'resources' on Instance uuid 7d19d2c6-6de1-4096-99e4-24b4265b9c09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:20:27 compute-0 podman[282443]: 2025-10-08 10:20:27.409804259 +0000 UTC m=+0.074467685 container died 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG nova.compute.manager [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG oslo_concurrency.lockutils [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG oslo_concurrency.lockutils [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG oslo_concurrency.lockutils [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG nova.compute.manager [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.412 2 DEBUG nova.compute.manager [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.426 2 DEBUG nova.virt.libvirt.vif [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:19:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1442491120',display_name='tempest-TestNetworkBasicOps-server-1442491120',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1442491120',id=11,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA5zqA1Qj/FXMxdyzpBTW0ZXp5DxknDQcIVK3ARN25T6VayPziIvkKCLWAtPemraMv4byPsH7lpRR4PeiITQ6eibmU22T/5fhhxWj1Ai2d949LVQyVHFvTo1rGRRAeVdbw==',key_name='tempest-TestNetworkBasicOps-1126023314',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:19:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-zjf5kwx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:19:38Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=7d19d2c6-6de1-4096-99e4-24b4265b9c09,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.426 2 DEBUG nova.network.os_vif_util [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.427 2 DEBUG nova.network.os_vif_util [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.427 2 DEBUG os_vif [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29abf06b-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.437 2 INFO os_vif [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e')
Oct 08 10:20:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce-userdata-shm.mount: Deactivated successfully.
Oct 08 10:20:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-669b7976e7c613a7666c66b557e5e70955b0380381cfc69b3da6fa8e03ce9e5e-merged.mount: Deactivated successfully.
Oct 08 10:20:27 compute-0 podman[282443]: 2025-10-08 10:20:27.460182696 +0000 UTC m=+0.124846132 container cleanup 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:20:27 compute-0 systemd[1]: libpod-conmon-7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce.scope: Deactivated successfully.
Oct 08 10:20:27 compute-0 podman[282494]: 2025-10-08 10:20:27.525359601 +0000 UTC m=+0.045113838 container remove 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.533 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[c8782e29-0356-470b-a1b5-46bb3cc7f8d3]: (4, ('Wed Oct  8 10:20:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 (7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce)\n7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce\nWed Oct  8 10:20:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 (7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce)\n7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.534 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[59d5ecb5-e7e0-4802-9962-b13c7aa6c870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.536 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc18c7476-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:27 compute-0 kernel: tapc18c7476-a0: left promiscuous mode
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.555 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2924e1-6223-44d3-b044-e0af098fad5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.585 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fb36f929-910d-4191-b441-9e7a087cadff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.586 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f4839136-33bc-41fb-a15d-7558ac29da04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.602 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[83e6d160-a2ea-4dfb-aecc-b32873148039]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473563, 'reachable_time': 16957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282534, 'error': None, 'target': 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:27 compute-0 systemd[1]: run-netns-ovnmeta\x2dc18c7476\x2daaa8\x2d4977\x2d81b5\x2dfb17e88446e2.mount: Deactivated successfully.
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.608 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 08 10:20:27 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.608 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f94e5d-5710-425f-88f8-1e7a5966a985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:27 compute-0 podman[282513]: 2025-10-08 10:20:27.627927373 +0000 UTC m=+0.055120651 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 08 10:20:27 compute-0 podman[282512]: 2025-10-08 10:20:27.627959434 +0000 UTC m=+0.057151506 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.927 2 INFO nova.virt.libvirt.driver [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Deleting instance files /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09_del
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.927 2 INFO nova.virt.libvirt.driver [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Deletion of /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09_del complete
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.989 2 INFO nova.compute.manager [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.991 2 DEBUG oslo.service.loopingcall [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.992 2 DEBUG nova.compute.manager [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 08 10:20:27 compute-0 nova_compute[262220]: 2025-10-08 10:20:27.992 2 DEBUG nova.network.neutron [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 08 10:20:28 compute-0 nova_compute[262220]: 2025-10-08 10:20:28.054 2 DEBUG nova.network.neutron [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:20:28 compute-0 nova_compute[262220]: 2025-10-08 10:20:28.055 2 DEBUG nova.network.neutron [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:20:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:28 compute-0 nova_compute[262220]: 2025-10-08 10:20:28.131 2 DEBUG oslo_concurrency.lockutils [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:20:28 compute-0 nova_compute[262220]: 2025-10-08 10:20:28.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:28 compute-0 ceph-mon[73572]: pgmap v1076: 353 pgs: 353 active+clean; 121 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 08 10:20:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:28.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.061 2 DEBUG nova.network.neutron [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.078 2 INFO nova.compute.manager [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Took 1.09 seconds to deallocate network for instance.
Oct 08 10:20:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:29.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.136 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.136 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.157 2 DEBUG nova.compute.manager [req-09d8b5dc-3cb3-404f-8808-b8c3e57bc3f6 req-fc96b914-cbfe-42d3-8ee6-89fa9919a85f 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-deleted-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.180 2 DEBUG oslo_concurrency.processutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 21 KiB/s wr, 58 op/s
Oct 08 10:20:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:29 compute-0 sudo[282557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:20:29 compute-0 sudo[282557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:20:29 compute-0 sudo[282557]: pam_unix(sudo:session): session closed for user root
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.506 2 DEBUG nova.compute.manager [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.507 2 DEBUG oslo_concurrency.lockutils [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.507 2 DEBUG oslo_concurrency.lockutils [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.508 2 DEBUG oslo_concurrency.lockutils [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.508 2 DEBUG nova.compute.manager [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.509 2 WARNING nova.compute.manager [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state deleted and task_state None.
Oct 08 10:20:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:20:29 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287948722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.678 2 DEBUG oslo_concurrency.processutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.685 2 DEBUG nova.compute.provider_tree [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.705 2 DEBUG nova.scheduler.client.report [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.758 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.820 2 INFO nova.scheduler.client.report [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Deleted allocations for instance 7d19d2c6-6de1-4096-99e4-24b4265b9c09
Oct 08 10:20:29 compute-0 nova_compute[262220]: 2025-10-08 10:20:29.890 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:30 compute-0 ceph-mon[73572]: pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 21 KiB/s wr, 58 op/s
Oct 08 10:20:30 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/287948722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:20:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:31.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:20:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Oct 08 10:20:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:32.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:32 compute-0 nova_compute[262220]: 2025-10-08 10:20:32.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:32 compute-0 ceph-mon[73572]: pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Oct 08 10:20:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:20:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:20:32 compute-0 nova_compute[262220]: 2025-10-08 10:20:32.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:33 compute-0 nova_compute[262220]: 2025-10-08 10:20:33.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:20:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:33.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:20:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Oct 08 10:20:33 compute-0 nova_compute[262220]: 2025-10-08 10:20:33.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:20:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:34.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:34 compute-0 ceph-mon[73572]: pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Oct 08 10:20:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:35.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Oct 08 10:20:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:20:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:20:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:36 compute-0 ceph-mon[73572]: pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Oct 08 10:20:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:37.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:37.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:20:37 compute-0 nova_compute[262220]: 2025-10-08 10:20:37.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:38.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:38 compute-0 nova_compute[262220]: 2025-10-08 10:20:38.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:38 compute-0 ceph-mon[73572]: pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 08 10:20:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:38.848Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:20:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:38.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:20:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:38.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:39.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 08 10:20:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:40.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:40 compute-0 ceph-mon[73572]: pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 08 10:20:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:41.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:41 compute-0 ceph-mon[73572]: pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:41 compute-0 podman[282616]: 2025-10-08 10:20:41.93161239 +0000 UTC m=+0.089055687 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 08 10:20:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:42.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:42 compute-0 nova_compute[262220]: 2025-10-08 10:20:42.406 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759918827.4045172, 7d19d2c6-6de1-4096-99e4-24b4265b9c09 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:20:42 compute-0 nova_compute[262220]: 2025-10-08 10:20:42.406 2 INFO nova.compute.manager [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] VM Stopped (Lifecycle Event)
Oct 08 10:20:42 compute-0 nova_compute[262220]: 2025-10-08 10:20:42.426 2 DEBUG nova.compute.manager [None req-f4ce2563-ce14-4ff3-98d9-7722612ae4fa - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:20:42 compute-0 nova_compute[262220]: 2025-10-08 10:20:42.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:20:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:43.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:20:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:43 compute-0 nova_compute[262220]: 2025-10-08 10:20:43.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:44.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:44 compute-0 ceph-mon[73572]: pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:45.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:20:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:20:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:20:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:46.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:46 compute-0 ceph-mon[73572]: pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:20:46 compute-0 nova_compute[262220]: 2025-10-08 10:20:46.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:47.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:47.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:47 compute-0 nova_compute[262220]: 2025-10-08 10:20:47.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:20:47
Oct 08 10:20:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:20:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:20:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Oct 08 10:20:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:20:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:20:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:20:47 compute-0 nova_compute[262220]: 2025-10-08 10:20:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:47 compute-0 nova_compute[262220]: 2025-10-08 10:20:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:20:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:20:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.290 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.290 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.304 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:20:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.365 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.366 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.371 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.371 2 INFO nova.compute.claims [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Claim successful on node compute-0.ctlplane.example.com
Oct 08 10:20:48 compute-0 ceph-mon[73572]: pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.462 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:48.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:20:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:48.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:20:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:20:48 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234880885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.923 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.928 2 DEBUG nova.compute.provider_tree [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:20:48 compute-0 nova_compute[262220]: 2025-10-08 10:20:48.977 2 DEBUG nova.scheduler.client.report [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:20:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.047 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.047 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.122 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.123 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 08 10:20:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:20:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:49.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.159 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.181 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.283 2 DEBUG nova.policy [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.288 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.289 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.290 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Creating image(s)
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.320 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:20:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:20:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.348 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.379 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.384 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:49 compute-0 sudo[282719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:20:49 compute-0 sudo[282719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:20:49 compute-0 sudo[282719]: pam_unix(sudo:session): session closed for user root
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.451 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.452 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "3cde70359534d4758cf71011630bd1fb14a90c92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.453 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.453 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4234880885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.482 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.486 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 20ffb86b-b5ba-4818-82e4-14a755c48807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.788 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 20ffb86b-b5ba-4818-82e4-14a755c48807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.870 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] resizing rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.904 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:49 compute-0 nova_compute[262220]: 2025-10-08 10:20:49.985 2 DEBUG nova.objects.instance [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'migration_context' on Instance uuid 20ffb86b-b5ba-4818-82e4-14a755c48807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.025 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.025 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Ensure instance console log exists: /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.026 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.026 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.026 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.068 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Successfully created port: 754d5578-d995-4502-af66-b164dfdf1189 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 08 10:20:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:50.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:50 compute-0 ceph-mon[73572]: pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.912 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.913 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.913 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.913 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.959 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.960 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.960 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.961 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:20:50 compute-0 nova_compute[262220]: 2025-10-08 10:20:50.961 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:51.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.152 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Successfully updated port: 754d5578-d995-4502-af66-b164dfdf1189 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.173 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.174 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.174 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.247 2 DEBUG nova.compute.manager [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-changed-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.248 2 DEBUG nova.compute.manager [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing instance network info cache due to event network-changed-754d5578-d995-4502-af66-b164dfdf1189. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.248 2 DEBUG oslo_concurrency.lockutils [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:20:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.360 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 08 10:20:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:20:51 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2859324008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.406 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2859324008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.582 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.583 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4536MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.584 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.584 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.677 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance 20ffb86b-b5ba-4818-82e4-14a755c48807 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.678 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.678 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:20:51 compute-0 nova_compute[262220]: 2025-10-08 10:20:51.705 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:52.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:20:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2913649116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:52 compute-0 nova_compute[262220]: 2025-10-08 10:20:52.180 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:52 compute-0 nova_compute[262220]: 2025-10-08 10:20:52.188 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:20:52 compute-0 nova_compute[262220]: 2025-10-08 10:20:52.217 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:20:52 compute-0 nova_compute[262220]: 2025-10-08 10:20:52.370 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:20:52 compute-0 nova_compute[262220]: 2025-10-08 10:20:52.370 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:52 compute-0 nova_compute[262220]: 2025-10-08 10:20:52.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:52 compute-0 ceph-mon[73572]: pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2272610649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2913649116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:52 compute-0 podman[282906]: 2025-10-08 10:20:52.920243651 +0000 UTC m=+0.081030497 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 08 10:20:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:53.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.344 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.344 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3659403123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/917134438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.679 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.757 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.758 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance network_info: |[{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.758 2 DEBUG oslo_concurrency.lockutils [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.758 2 DEBUG nova.network.neutron [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing network info cache for port 754d5578-d995-4502-af66-b164dfdf1189 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.760 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start _get_guest_xml network_info=[{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': 'e5994bac-385d-4cfe-962e-386aa0559983'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.764 2 WARNING nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.769 2 DEBUG nova.virt.libvirt.host [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.770 2 DEBUG nova.virt.libvirt.host [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.773 2 DEBUG nova.virt.libvirt.host [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.774 2 DEBUG nova.virt.libvirt.host [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.774 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.775 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T10:08:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='461f98d6-ae65-4f86-8ae2-cc3cfaea2a46',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.775 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.776 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.776 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.776 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.776 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.777 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.777 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.778 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.778 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.779 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 08 10:20:53 compute-0 nova_compute[262220]: 2025-10-08 10:20:53.782 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:54.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 08 10:20:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1935607548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.250 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.277 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.281 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:54 compute-0 ceph-mon[73572]: pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:20:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/983316407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:20:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1935607548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:20:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 08 10:20:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838441368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.708 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.709 2 DEBUG nova.virt.libvirt.vif [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:20:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-451384508',display_name='tempest-TestNetworkBasicOps-server-451384508',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-451384508',id=13,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzAAbR+LebFHZ4MQpbXVINvQrQE4iZi3jhjlRa4bUuBuh7BAgqwE3gXNZho6NGF97w7AAO52PK7tmiXY23liBZwBI0PDfy6ztl7vXddFfJ7MBnkOiMny5dlb5dxWiMeog==',key_name='tempest-TestNetworkBasicOps-1706390229',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-mat2tuft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:20:49Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=20ffb86b-b5ba-4818-82e4-14a755c48807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} 
virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.710 2 DEBUG nova.network.os_vif_util [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.711 2 DEBUG nova.network.os_vif_util [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.712 2 DEBUG nova.objects.instance [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_devices' on Instance uuid 20ffb86b-b5ba-4818-82e4-14a755c48807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.747 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] End _get_guest_xml xml=<domain type="kvm">
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <uuid>20ffb86b-b5ba-4818-82e4-14a755c48807</uuid>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <name>instance-0000000d</name>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <memory>131072</memory>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <vcpu>1</vcpu>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <metadata>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <nova:name>tempest-TestNetworkBasicOps-server-451384508</nova:name>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <nova:creationTime>2025-10-08 10:20:53</nova:creationTime>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <nova:flavor name="m1.nano">
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <nova:memory>128</nova:memory>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <nova:disk>1</nova:disk>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <nova:swap>0</nova:swap>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <nova:vcpus>1</nova:vcpus>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       </nova:flavor>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <nova:owner>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       </nova:owner>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <nova:ports>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <nova:port uuid="754d5578-d995-4502-af66-b164dfdf1189">
Oct 08 10:20:54 compute-0 nova_compute[262220]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         </nova:port>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       </nova:ports>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     </nova:instance>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   </metadata>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <sysinfo type="smbios">
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <system>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <entry name="manufacturer">RDO</entry>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <entry name="product">OpenStack Compute</entry>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <entry name="serial">20ffb86b-b5ba-4818-82e4-14a755c48807</entry>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <entry name="uuid">20ffb86b-b5ba-4818-82e4-14a755c48807</entry>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <entry name="family">Virtual Machine</entry>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     </system>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   </sysinfo>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <os>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <boot dev="hd"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <smbios mode="sysinfo"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   </os>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <features>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <acpi/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <apic/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <vmcoreinfo/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   </features>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <clock offset="utc">
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <timer name="hpet" present="no"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   </clock>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <cpu mode="host-model" match="exact">
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   </cpu>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   <devices>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <disk type="network" device="disk">
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <driver type="raw" cache="none"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <source protocol="rbd" name="vms/20ffb86b-b5ba-4818-82e4-14a755c48807_disk">
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <host name="192.168.122.100" port="6789"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <host name="192.168.122.102" port="6789"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <host name="192.168.122.101" port="6789"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       </source>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <auth username="openstack">
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <target dev="vda" bus="virtio"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <disk type="network" device="cdrom">
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <driver type="raw" cache="none"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <source protocol="rbd" name="vms/20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config">
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <host name="192.168.122.100" port="6789"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <host name="192.168.122.102" port="6789"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <host name="192.168.122.101" port="6789"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       </source>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <auth username="openstack">
Oct 08 10:20:54 compute-0 nova_compute[262220]:         <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       </auth>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <target dev="sda" bus="sata"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     </disk>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <interface type="ethernet">
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <mac address="fa:16:3e:5d:e9:f4"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <model type="virtio"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <mtu size="1442"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <target dev="tap754d5578-d9"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     </interface>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <serial type="pty">
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <log file="/var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/console.log" append="off"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     </serial>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <video>
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <model type="virtio"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     </video>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <input type="tablet" bus="usb"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <rng model="virtio">
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <backend model="random">/dev/urandom</backend>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     </rng>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <controller type="usb" index="0"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     <memballoon model="virtio">
Oct 08 10:20:54 compute-0 nova_compute[262220]:       <stats period="10"/>
Oct 08 10:20:54 compute-0 nova_compute[262220]:     </memballoon>
Oct 08 10:20:54 compute-0 nova_compute[262220]:   </devices>
Oct 08 10:20:54 compute-0 nova_compute[262220]: </domain>
Oct 08 10:20:54 compute-0 nova_compute[262220]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.748 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Preparing to wait for external event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.748 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.749 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.749 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.750 2 DEBUG nova.virt.libvirt.vif [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:20:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-451384508',display_name='tempest-TestNetworkBasicOps-server-451384508',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-451384508',id=13,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzAAbR+LebFHZ4MQpbXVINvQrQE4iZi3jhjlRa4bUuBuh7BAgqwE3gXNZho6NGF97w7AAO52PK7tmiXY23liBZwBI0PDfy6ztl7vXddFfJ7MBnkOiMny5dlb5dxWiMeog==',key_name='tempest-TestNetworkBasicOps-1706390229',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-mat2tuft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:20:49Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=20ffb86b-b5ba-4818-82e4-14a755c48807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.750 2 DEBUG nova.network.os_vif_util [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.750 2 DEBUG nova.network.os_vif_util [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.751 2 DEBUG os_vif [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.752 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.752 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap754d5578-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap754d5578-d9, col_values=(('external_ids', {'iface-id': '754d5578-d995-4502-af66-b164dfdf1189', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:e9:f4', 'vm-uuid': '20ffb86b-b5ba-4818-82e4-14a755c48807'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:54 compute-0 NetworkManager[44872]: <info>  [1759918854.7584] manager: (tap754d5578-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.765 2 INFO os_vif [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9')
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.919 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.919 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.920 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:5d:e9:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.920 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Using config drive
Oct 08 10:20:54 compute-0 nova_compute[262220]: 2025-10-08 10:20:54.952 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:20:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:55.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:55 compute-0 nova_compute[262220]: 2025-10-08 10:20:55.321 2 DEBUG nova.network.neutron [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updated VIF entry in instance network info cache for port 754d5578-d995-4502-af66-b164dfdf1189. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:20:55 compute-0 nova_compute[262220]: 2025-10-08 10:20:55.322 2 DEBUG nova.network.neutron [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:20:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:20:55 compute-0 nova_compute[262220]: 2025-10-08 10:20:55.445 2 DEBUG oslo_concurrency.lockutils [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:20:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/838441368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 08 10:20:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:55] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:20:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:55] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:20:55 compute-0 nova_compute[262220]: 2025-10-08 10:20:55.872 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Creating config drive at /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config
Oct 08 10:20:55 compute-0 nova_compute[262220]: 2025-10-08 10:20:55.877 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wh3428j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:55 compute-0 nova_compute[262220]: 2025-10-08 10:20:55.907 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.023 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wh3428j" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.054 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.058 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:20:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:56.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.222 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.223 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Deleting local config drive /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config because it was imported into RBD.
Oct 08 10:20:56 compute-0 NetworkManager[44872]: <info>  [1759918856.2768] manager: (tap754d5578-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 08 10:20:56 compute-0 kernel: tap754d5578-d9: entered promiscuous mode
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:56 compute-0 ovn_controller[153187]: 2025-10-08T10:20:56Z|00068|binding|INFO|Claiming lport 754d5578-d995-4502-af66-b164dfdf1189 for this chassis.
Oct 08 10:20:56 compute-0 ovn_controller[153187]: 2025-10-08T10:20:56Z|00069|binding|INFO|754d5578-d995-4502-af66-b164dfdf1189: Claiming fa:16:3e:5d:e9:f4 10.100.0.6
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.293 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:e9:f4 10.100.0.6'], port_security=['fa:16:3e:5d:e9:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20ffb86b-b5ba-4818-82e4-14a755c48807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84428682-9eff-4658-a105-8c0d1de9c87f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16d57876-2c07-4569-9200-1b8e93dece9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9ad9bb7-7c7b-464c-bbd0-86ab756be37d, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=754d5578-d995-4502-af66-b164dfdf1189) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.295 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 754d5578-d995-4502-af66-b164dfdf1189 in datapath 84428682-9eff-4658-a105-8c0d1de9c87f bound to our chassis
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.296 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84428682-9eff-4658-a105-8c0d1de9c87f
Oct 08 10:20:56 compute-0 systemd-udevd[283072]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.307 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[63a60383-39fe-4bb7-b6fb-a742f309ed0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.308 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84428682-91 in ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 08 10:20:56 compute-0 systemd-machined[216030]: New machine qemu-4-instance-0000000d.
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.314 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84428682-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.314 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[50931ffe-53d7-4233-b8d0-6b2274a493d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.315 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[aa252b14-8018-4e61-8977-1cf67ad18958]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 NetworkManager[44872]: <info>  [1759918856.3208] device (tap754d5578-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 10:20:56 compute-0 NetworkManager[44872]: <info>  [1759918856.3227] device (tap754d5578-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.326 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[3614025f-17f7-4a93-97e8-0455be8fffb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-0000000d.
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.354 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[529e2ef0-00fb-4262-b780-c9ae777ba119]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:56 compute-0 ovn_controller[153187]: 2025-10-08T10:20:56Z|00070|binding|INFO|Setting lport 754d5578-d995-4502-af66-b164dfdf1189 ovn-installed in OVS
Oct 08 10:20:56 compute-0 ovn_controller[153187]: 2025-10-08T10:20:56Z|00071|binding|INFO|Setting lport 754d5578-d995-4502-af66-b164dfdf1189 up in Southbound
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.385 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[48cd43e4-8c3c-42f5-a9fb-b9636cab651b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.390 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[20b425f2-5b12-4e3b-b05f-af7de13d5e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 NetworkManager[44872]: <info>  [1759918856.3924] manager: (tap84428682-90): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.425 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[5688bf14-4dd3-4879-bd2b-ae442ff48cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.428 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c4eadd-263c-4afb-930c-f19827d0466b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 NetworkManager[44872]: <info>  [1759918856.4501] device (tap84428682-90): carrier: link connected
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.457 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[cd52d35f-ffc2-4b5a-8e15-079da6d9db27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.474 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bde2e7-e843-41e4-8968-b48fe339e51b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84428682-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:de:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481508, 'reachable_time': 16262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283104, 'error': None, 'target': 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.497 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[955430c4-c030-4a5d-a527-0ca875c6cc3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:de8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481508, 'tstamp': 481508}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283105, 'error': None, 'target': 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.514 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[7b37b536-1c18-49d0-9139-9d76a6e0215d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84428682-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:de:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481508, 'reachable_time': 16262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283106, 'error': None, 'target': 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.552 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f1944d86-3a22-48c2-8003-8a3df97e2e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ceph-mon[73572]: pgmap v1090: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.624 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[a7fb33f3-0d16-4475-b6da-03fd037341db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.626 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84428682-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.626 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.627 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84428682-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:56 compute-0 kernel: tap84428682-90: entered promiscuous mode
Oct 08 10:20:56 compute-0 NetworkManager[44872]: <info>  [1759918856.6302] manager: (tap84428682-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.636 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84428682-90, col_values=(('external_ids', {'iface-id': 'aead10e1-bf7c-4d43-bf9e-517a64e3ea62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:56 compute-0 ovn_controller[153187]: 2025-10-08T10:20:56Z|00072|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.640 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84428682-9eff-4658-a105-8c0d1de9c87f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84428682-9eff-4658-a105-8c0d1de9c87f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.642 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6f97d722-3589-4825-9b9f-22b9fa6e67ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.642 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: global
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     log         /dev/log local0 debug
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     log-tag     haproxy-metadata-proxy-84428682-9eff-4658-a105-8c0d1de9c87f
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     user        root
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     group       root
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     maxconn     1024
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     pidfile     /var/lib/neutron/external/pids/84428682-9eff-4658-a105-8c0d1de9c87f.pid.haproxy
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     daemon
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: defaults
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     log global
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     mode http
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     option httplog
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     option dontlognull
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     option http-server-close
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     option forwardfor
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     retries                 3
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     timeout http-request    30s
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     timeout connect         30s
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     timeout client          32s
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     timeout server          32s
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     timeout http-keep-alive 30s
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: listen listener
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     bind 169.254.169.254:80
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:     http-request add-header X-OVN-Network-ID 84428682-9eff-4658-a105-8c0d1de9c87f
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.643 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'env', 'PROCESS_TAG=haproxy-84428682-9eff-4658-a105-8c0d1de9c87f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84428682-9eff-4658-a105-8c0d1de9c87f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.660 2 DEBUG nova.compute.manager [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.660 2 DEBUG oslo_concurrency.lockutils [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.661 2 DEBUG oslo_concurrency.lockutils [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.661 2 DEBUG oslo_concurrency.lockutils [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.661 2 DEBUG nova.compute.manager [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Processing event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 08 10:20:56 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.956 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:20:56 compute-0 nova_compute[262220]: 2025-10-08 10:20:56.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:57 compute-0 podman[283179]: 2025-10-08 10:20:57.03779082 +0000 UTC m=+0.050971948 container create a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 08 10:20:57 compute-0 systemd[1]: Started libpod-conmon-a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8.scope.
Oct 08 10:20:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9305ceef478b92232a6159096bf3391d562b676524b5a981565d66364f354e43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 10:20:57 compute-0 podman[283179]: 2025-10-08 10:20:57.012973218 +0000 UTC m=+0.026154376 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 08 10:20:57 compute-0 podman[283179]: 2025-10-08 10:20:57.116682607 +0000 UTC m=+0.129863745 container init a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 08 10:20:57 compute-0 podman[283179]: 2025-10-08 10:20:57.126530076 +0000 UTC m=+0.139711214 container start a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 08 10:20:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:57.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:57 compute-0 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [NOTICE]   (283199) : New worker (283201) forked
Oct 08 10:20:57 compute-0 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [NOTICE]   (283199) : Loading success.
Oct 08 10:20:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:57.193 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:20:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:57.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.213 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.215 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918857.212883, 20ffb86b-b5ba-4818-82e4-14a755c48807 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.215 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] VM Started (Lifecycle Event)
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.231 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.235 2 INFO nova.virt.libvirt.driver [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance spawned successfully.
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.235 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 08 10:20:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.360 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.363 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.381 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.381 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.382 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.383 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.383 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.384 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.405 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.405 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918857.2152803, 20ffb86b-b5ba-4818-82e4-14a755c48807 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.405 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] VM Paused (Lifecycle Event)
Oct 08 10:20:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:57.418 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:57.419 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:20:57.420 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.641 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.644 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918857.218107, 20ffb86b-b5ba-4818-82e4-14a755c48807 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.644 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] VM Resumed (Lifecycle Event)
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.763 2 INFO nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Took 8.47 seconds to spawn the instance on the hypervisor.
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.764 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.770 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.773 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.834 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.862 2 INFO nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Took 9.52 seconds to build instance.
Oct 08 10:20:57 compute-0 nova_compute[262220]: 2025-10-08 10:20:57.897 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:57 compute-0 podman[283211]: 2025-10-08 10:20:57.901519501 +0000 UTC m=+0.054784821 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 08 10:20:57 compute-0 podman[283212]: 2025-10-08 10:20:57.927245452 +0000 UTC m=+0.077221165 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 10:20:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:20:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:58.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:20:58 compute-0 nova_compute[262220]: 2025-10-08 10:20:58.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:20:58 compute-0 ceph-mon[73572]: pgmap v1091: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 08 10:20:58 compute-0 nova_compute[262220]: 2025-10-08 10:20:58.769 2 DEBUG nova.compute.manager [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:20:58 compute-0 nova_compute[262220]: 2025-10-08 10:20:58.770 2 DEBUG oslo_concurrency.lockutils [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:20:58 compute-0 nova_compute[262220]: 2025-10-08 10:20:58.770 2 DEBUG oslo_concurrency.lockutils [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:20:58 compute-0 nova_compute[262220]: 2025-10-08 10:20:58.770 2 DEBUG oslo_concurrency.lockutils [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:20:58 compute-0 nova_compute[262220]: 2025-10-08 10:20:58.771 2 DEBUG nova.compute.manager [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] No waiting events found dispatching network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:20:58 compute-0 nova_compute[262220]: 2025-10-08 10:20:58.771 2 WARNING nova.compute.manager [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received unexpected event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 for instance with vm_state active and task_state None.
Oct 08 10:20:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:58.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:20:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:20:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:20:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:20:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:20:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:20:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:20:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:59.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:20:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 08 10:20:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:20:59 compute-0 nova_compute[262220]: 2025-10-08 10:20:59.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:00 compute-0 ovn_controller[153187]: 2025-10-08T10:21:00Z|00073|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct 08 10:21:00 compute-0 NetworkManager[44872]: <info>  [1759918860.1332] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct 08 10:21:00 compute-0 NetworkManager[44872]: <info>  [1759918860.1349] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 08 10:21:00 compute-0 nova_compute[262220]: 2025-10-08 10:21:00.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:21:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:00.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:21:00 compute-0 ovn_controller[153187]: 2025-10-08T10:21:00Z|00074|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct 08 10:21:00 compute-0 nova_compute[262220]: 2025-10-08 10:21:00.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:00 compute-0 nova_compute[262220]: 2025-10-08 10:21:00.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:00 compute-0 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG nova.compute.manager [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-changed-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:21:00 compute-0 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG nova.compute.manager [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing instance network info cache due to event network-changed-754d5578-d995-4502-af66-b164dfdf1189. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:21:00 compute-0 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG oslo_concurrency.lockutils [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:21:00 compute-0 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG oslo_concurrency.lockutils [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:21:00 compute-0 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG nova.network.neutron [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing network info cache for port 754d5578-d995-4502-af66-b164dfdf1189 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:21:00 compute-0 ceph-mon[73572]: pgmap v1092: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 08 10:21:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:01.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 08 10:21:01 compute-0 nova_compute[262220]: 2025-10-08 10:21:01.518 2 DEBUG nova.network.neutron [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updated VIF entry in instance network info cache for port 754d5578-d995-4502-af66-b164dfdf1189. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:21:01 compute-0 nova_compute[262220]: 2025-10-08 10:21:01.519 2 DEBUG nova.network.neutron [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:21:01 compute-0 nova_compute[262220]: 2025-10-08 10:21:01.543 2 DEBUG oslo_concurrency.lockutils [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:21:01 compute-0 ceph-mon[73572]: pgmap v1093: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 08 10:21:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:21:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:02.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:21:02 compute-0 sudo[283254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:21:02 compute-0 sudo[283254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:02 compute-0 sudo[283254]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:02 compute-0 sudo[283279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:21:02 compute-0 sudo[283279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:02 compute-0 sudo[283279]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:21:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:21:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:21:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:21:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:21:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:21:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:21:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:21:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 08 10:21:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:21:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:21:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:21:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:21:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:21:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:21:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:21:02 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:21:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:21:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:21:03 compute-0 sudo[283336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:21:03 compute-0 sudo[283336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:03 compute-0 sudo[283336]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:03 compute-0 sudo[283361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:21:03 compute-0 sudo[283361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:03.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:03 compute-0 nova_compute[262220]: 2025-10-08 10:21:03.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:03 compute-0 podman[283426]: 2025-10-08 10:21:03.488297452 +0000 UTC m=+0.044530509 container create 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:21:03 compute-0 systemd[1]: Started libpod-conmon-9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3.scope.
Oct 08 10:21:03 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:21:03 compute-0 podman[283426]: 2025-10-08 10:21:03.470524388 +0000 UTC m=+0.026757455 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:21:03 compute-0 podman[283426]: 2025-10-08 10:21:03.581515742 +0000 UTC m=+0.137748809 container init 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:21:03 compute-0 podman[283426]: 2025-10-08 10:21:03.59011294 +0000 UTC m=+0.146345987 container start 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:21:03 compute-0 podman[283426]: 2025-10-08 10:21:03.592734285 +0000 UTC m=+0.148967352 container attach 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 10:21:03 compute-0 frosty_panini[283443]: 167 167
Oct 08 10:21:03 compute-0 systemd[1]: libpod-9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3.scope: Deactivated successfully.
Oct 08 10:21:03 compute-0 conmon[283443]: conmon 9716dd35eca77001f827 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3.scope/container/memory.events
Oct 08 10:21:03 compute-0 podman[283426]: 2025-10-08 10:21:03.596496125 +0000 UTC m=+0.152729192 container died 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 10:21:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e980b9c50f3672dda04709a039eba3c84d331e2b9531596fe1ee1313016df19a-merged.mount: Deactivated successfully.
Oct 08 10:21:03 compute-0 podman[283426]: 2025-10-08 10:21:03.636864469 +0000 UTC m=+0.193097516 container remove 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 10:21:03 compute-0 systemd[1]: libpod-conmon-9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3.scope: Deactivated successfully.
Oct 08 10:21:03 compute-0 podman[283466]: 2025-10-08 10:21:03.805144743 +0000 UTC m=+0.046332617 container create fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:21:03 compute-0 systemd[1]: Started libpod-conmon-fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a.scope.
Oct 08 10:21:03 compute-0 podman[283466]: 2025-10-08 10:21:03.78461843 +0000 UTC m=+0.025806324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:21:03 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:21:03 compute-0 ceph-mon[73572]: pgmap v1094: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 08 10:21:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:21:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:21:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:21:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:21:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:21:03 compute-0 podman[283466]: 2025-10-08 10:21:03.937239759 +0000 UTC m=+0.178427713 container init fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 08 10:21:03 compute-0 podman[283466]: 2025-10-08 10:21:03.943759979 +0000 UTC m=+0.184947893 container start fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:21:03 compute-0 podman[283466]: 2025-10-08 10:21:03.949288197 +0000 UTC m=+0.190476101 container attach fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 10:21:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:04.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:04 compute-0 charming_edison[283483]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:21:04 compute-0 charming_edison[283483]: --> All data devices are unavailable
Oct 08 10:21:04 compute-0 systemd[1]: libpod-fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a.scope: Deactivated successfully.
Oct 08 10:21:04 compute-0 podman[283466]: 2025-10-08 10:21:04.304271921 +0000 UTC m=+0.545459805 container died fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb-merged.mount: Deactivated successfully.
Oct 08 10:21:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:04 compute-0 podman[283466]: 2025-10-08 10:21:04.359948579 +0000 UTC m=+0.601136463 container remove fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:21:04 compute-0 systemd[1]: libpod-conmon-fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a.scope: Deactivated successfully.
Oct 08 10:21:04 compute-0 sudo[283361]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:04 compute-0 sudo[283511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:21:04 compute-0 sudo[283511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:04 compute-0 sudo[283511]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:04 compute-0 sudo[283536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:21:04 compute-0 sudo[283536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:04 compute-0 nova_compute[262220]: 2025-10-08 10:21:04.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 08 10:21:05 compute-0 podman[283599]: 2025-10-08 10:21:05.053286587 +0000 UTC m=+0.049377985 container create eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:21:05 compute-0 systemd[1]: Started libpod-conmon-eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3.scope.
Oct 08 10:21:05 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:21:05 compute-0 podman[283599]: 2025-10-08 10:21:05.032376112 +0000 UTC m=+0.028467540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:21:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:21:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:05.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:21:05 compute-0 podman[283599]: 2025-10-08 10:21:05.154672051 +0000 UTC m=+0.150763469 container init eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:21:05 compute-0 podman[283599]: 2025-10-08 10:21:05.161699767 +0000 UTC m=+0.157791165 container start eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 10:21:05 compute-0 podman[283599]: 2025-10-08 10:21:05.165644115 +0000 UTC m=+0.161735513 container attach eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:21:05 compute-0 competent_wilson[283616]: 167 167
Oct 08 10:21:05 compute-0 systemd[1]: libpod-eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3.scope: Deactivated successfully.
Oct 08 10:21:05 compute-0 conmon[283616]: conmon eeca578d42a035890efb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3.scope/container/memory.events
Oct 08 10:21:05 compute-0 podman[283599]: 2025-10-08 10:21:05.169378696 +0000 UTC m=+0.165470114 container died eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5867df47bc60c7a3eeef65aa44f38d516d1cea97906dd30fa2fe7c3f95dee430-merged.mount: Deactivated successfully.
Oct 08 10:21:05 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:05.195 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:21:05 compute-0 podman[283599]: 2025-10-08 10:21:05.20360849 +0000 UTC m=+0.199699888 container remove eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:21:05 compute-0 systemd[1]: libpod-conmon-eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3.scope: Deactivated successfully.
Oct 08 10:21:05 compute-0 podman[283640]: 2025-10-08 10:21:05.380305836 +0000 UTC m=+0.057611251 container create 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 08 10:21:05 compute-0 systemd[1]: Started libpod-conmon-19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c.scope.
Oct 08 10:21:05 compute-0 podman[283640]: 2025-10-08 10:21:05.353627845 +0000 UTC m=+0.030933330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:21:05 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:05 compute-0 podman[283640]: 2025-10-08 10:21:05.484239742 +0000 UTC m=+0.161545157 container init 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:21:05 compute-0 podman[283640]: 2025-10-08 10:21:05.493196882 +0000 UTC m=+0.170502287 container start 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:21:05 compute-0 podman[283640]: 2025-10-08 10:21:05.496684525 +0000 UTC m=+0.173989950 container attach 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 08 10:21:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:05] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Oct 08 10:21:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:05] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]: {
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:     "1": [
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:         {
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "devices": [
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "/dev/loop3"
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             ],
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "lv_name": "ceph_lv0",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "lv_size": "21470642176",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "name": "ceph_lv0",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "tags": {
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.cluster_name": "ceph",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.crush_device_class": "",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.encrypted": "0",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.osd_id": "1",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.type": "block",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.vdo": "0",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:                 "ceph.with_tpm": "0"
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             },
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "type": "block",
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:             "vg_name": "ceph_vg0"
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:         }
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]:     ]
Oct 08 10:21:05 compute-0 interesting_bhabha[283657]: }
Oct 08 10:21:05 compute-0 systemd[1]: libpod-19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c.scope: Deactivated successfully.
Oct 08 10:21:05 compute-0 podman[283640]: 2025-10-08 10:21:05.808027158 +0000 UTC m=+0.485332563 container died 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 08 10:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6-merged.mount: Deactivated successfully.
Oct 08 10:21:05 compute-0 podman[283640]: 2025-10-08 10:21:05.863332074 +0000 UTC m=+0.540637519 container remove 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 10:21:05 compute-0 sudo[283536]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:05 compute-0 systemd[1]: libpod-conmon-19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c.scope: Deactivated successfully.
Oct 08 10:21:05 compute-0 ceph-mon[73572]: pgmap v1095: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 08 10:21:05 compute-0 sudo[283680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:21:05 compute-0 sudo[283680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:05 compute-0 sudo[283680]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:06 compute-0 sudo[283706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:21:06 compute-0 sudo[283706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:06.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:06 compute-0 podman[283773]: 2025-10-08 10:21:06.4472694 +0000 UTC m=+0.048988563 container create 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:21:06 compute-0 systemd[1]: Started libpod-conmon-7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02.scope.
Oct 08 10:21:06 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:21:06 compute-0 podman[283773]: 2025-10-08 10:21:06.426547161 +0000 UTC m=+0.028266354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:21:06 compute-0 podman[283773]: 2025-10-08 10:21:06.522310462 +0000 UTC m=+0.124029615 container init 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 10:21:06 compute-0 podman[283773]: 2025-10-08 10:21:06.528859324 +0000 UTC m=+0.130578487 container start 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:21:06 compute-0 podman[283773]: 2025-10-08 10:21:06.532607646 +0000 UTC m=+0.134326829 container attach 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:21:06 compute-0 dreamy_hermann[283790]: 167 167
Oct 08 10:21:06 compute-0 systemd[1]: libpod-7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02.scope: Deactivated successfully.
Oct 08 10:21:06 compute-0 podman[283773]: 2025-10-08 10:21:06.53401091 +0000 UTC m=+0.135730063 container died 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 10:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dafc3cc8d41f2cfe9db45b0847c94394823fd6beb55eef00a0a26fe20186932-merged.mount: Deactivated successfully.
Oct 08 10:21:06 compute-0 podman[283773]: 2025-10-08 10:21:06.569993622 +0000 UTC m=+0.171712775 container remove 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:21:06 compute-0 systemd[1]: libpod-conmon-7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02.scope: Deactivated successfully.
Oct 08 10:21:06 compute-0 podman[283813]: 2025-10-08 10:21:06.755067719 +0000 UTC m=+0.049290743 container create 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:21:06 compute-0 systemd[1]: Started libpod-conmon-66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76.scope.
Oct 08 10:21:06 compute-0 podman[283813]: 2025-10-08 10:21:06.736596052 +0000 UTC m=+0.030819096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:21:06 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:21:06 compute-0 podman[283813]: 2025-10-08 10:21:06.867346464 +0000 UTC m=+0.161569558 container init 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 08 10:21:06 compute-0 podman[283813]: 2025-10-08 10:21:06.873691309 +0000 UTC m=+0.167914323 container start 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 10:21:06 compute-0 podman[283813]: 2025-10-08 10:21:06.877205723 +0000 UTC m=+0.171428827 container attach 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 10:21:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 08 10:21:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:07.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:07.195Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:21:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:07.195Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:21:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:07.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:21:07 compute-0 lvm[283905]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:21:07 compute-0 lvm[283905]: VG ceph_vg0 finished
Oct 08 10:21:07 compute-0 charming_lamarr[283830]: {}
Oct 08 10:21:07 compute-0 systemd[1]: libpod-66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76.scope: Deactivated successfully.
Oct 08 10:21:07 compute-0 podman[283813]: 2025-10-08 10:21:07.657454068 +0000 UTC m=+0.951677082 container died 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 10:21:07 compute-0 systemd[1]: libpod-66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76.scope: Consumed 1.114s CPU time.
Oct 08 10:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65-merged.mount: Deactivated successfully.
Oct 08 10:21:07 compute-0 podman[283813]: 2025-10-08 10:21:07.70613871 +0000 UTC m=+1.000361724 container remove 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 10:21:07 compute-0 systemd[1]: libpod-conmon-66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76.scope: Deactivated successfully.
Oct 08 10:21:07 compute-0 sudo[283706]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:21:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:21:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:21:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:21:07 compute-0 sudo[283919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:21:07 compute-0 sudo[283919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:07 compute-0 sudo[283919]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:07 compute-0 ceph-mon[73572]: pgmap v1096: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 08 10:21:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:21:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:21:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:08.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:08 compute-0 nova_compute[262220]: 2025-10-08 10:21:08.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:08.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:21:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:08.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 08 10:21:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:09.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:09 compute-0 ovn_controller[153187]: 2025-10-08T10:21:09Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:e9:f4 10.100.0.6
Oct 08 10:21:09 compute-0 ovn_controller[153187]: 2025-10-08T10:21:09Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:e9:f4 10.100.0.6
Oct 08 10:21:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:09 compute-0 sudo[283946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:21:09 compute-0 sudo[283946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:09 compute-0 sudo[283946]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:09 compute-0 nova_compute[262220]: 2025-10-08 10:21:09.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:10 compute-0 ceph-mon[73572]: pgmap v1097: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 08 10:21:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:10.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 08 10:21:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:11.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:12 compute-0 ceph-mon[73572]: pgmap v1098: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 08 10:21:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:12.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 08 10:21:12 compute-0 podman[283974]: 2025-10-08 10:21:12.934991874 +0000 UTC m=+0.086415361 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 08 10:21:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:13.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:13 compute-0 nova_compute[262220]: 2025-10-08 10:21:13.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:14 compute-0 ceph-mon[73572]: pgmap v1099: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 08 10:21:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:14.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:14 compute-0 nova_compute[262220]: 2025-10-08 10:21:14.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 605 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 08 10:21:15 compute-0 nova_compute[262220]: 2025-10-08 10:21:15.045 2 INFO nova.compute.manager [None req-b7e65dd7-0c37-4fb1-a4e5-af46f3e28783 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Get console output
Oct 08 10:21:15 compute-0 nova_compute[262220]: 2025-10-08 10:21:15.050 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 08 10:21:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:15.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:15] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Oct 08 10:21:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:15] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Oct 08 10:21:15 compute-0 ovn_controller[153187]: 2025-10-08T10:21:15Z|00075|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct 08 10:21:15 compute-0 nova_compute[262220]: 2025-10-08 10:21:15.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:15 compute-0 ovn_controller[153187]: 2025-10-08T10:21:15Z|00076|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct 08 10:21:15 compute-0 nova_compute[262220]: 2025-10-08 10:21:15.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:16.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:16 compute-0 ceph-mon[73572]: pgmap v1100: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 605 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 08 10:21:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:21:17 compute-0 nova_compute[262220]: 2025-10-08 10:21:17.091 2 INFO nova.compute.manager [None req-fd9c196e-c764-4c81-9d2a-89372caff073 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Get console output
Oct 08 10:21:17 compute-0 nova_compute[262220]: 2025-10-08 10:21:17.099 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 08 10:21:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:17.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:17.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:21:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:21:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:21:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:21:17 compute-0 nova_compute[262220]: 2025-10-08 10:21:17.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:17 compute-0 NetworkManager[44872]: <info>  [1759918877.9771] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 08 10:21:17 compute-0 NetworkManager[44872]: <info>  [1759918877.9789] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 08 10:21:18 compute-0 nova_compute[262220]: 2025-10-08 10:21:18.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:18 compute-0 ovn_controller[153187]: 2025-10-08T10:21:18Z|00077|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct 08 10:21:18 compute-0 ovn_controller[153187]: 2025-10-08T10:21:18Z|00078|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct 08 10:21:18 compute-0 nova_compute[262220]: 2025-10-08 10:21:18.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:21:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:21:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:21:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:21:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:18.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:18 compute-0 ceph-mon[73572]: pgmap v1101: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 08 10:21:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:21:18 compute-0 nova_compute[262220]: 2025-10-08 10:21:18.238 2 INFO nova.compute.manager [None req-77a15bed-ce4b-4893-be27-147d1f7ae8fd d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Get console output
Oct 08 10:21:18 compute-0 nova_compute[262220]: 2025-10-08 10:21:18.244 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 08 10:21:18 compute-0 nova_compute[262220]: 2025-10-08 10:21:18.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:18.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:21:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:19.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:19 compute-0 nova_compute[262220]: 2025-10-08 10:21:19.844 2 DEBUG nova.compute.manager [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-changed-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:21:19 compute-0 nova_compute[262220]: 2025-10-08 10:21:19.845 2 DEBUG nova.compute.manager [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing instance network info cache due to event network-changed-754d5578-d995-4502-af66-b164dfdf1189. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 08 10:21:19 compute-0 nova_compute[262220]: 2025-10-08 10:21:19.845 2 DEBUG oslo_concurrency.lockutils [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 08 10:21:19 compute-0 nova_compute[262220]: 2025-10-08 10:21:19.845 2 DEBUG oslo_concurrency.lockutils [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 08 10:21:19 compute-0 nova_compute[262220]: 2025-10-08 10:21:19.845 2 DEBUG nova.network.neutron [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing network info cache for port 754d5578-d995-4502-af66-b164dfdf1189 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 08 10:21:19 compute-0 nova_compute[262220]: 2025-10-08 10:21:19.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:21:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:20.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:21:20 compute-0 ceph-mon[73572]: pgmap v1102: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.484 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.484 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.485 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.485 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.486 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.488 2 INFO nova.compute.manager [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Terminating instance
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.490 2 DEBUG nova.compute.manager [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 08 10:21:20 compute-0 kernel: tap754d5578-d9 (unregistering): left promiscuous mode
Oct 08 10:21:20 compute-0 NetworkManager[44872]: <info>  [1759918880.5524] device (tap754d5578-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 10:21:20 compute-0 ovn_controller[153187]: 2025-10-08T10:21:20Z|00079|binding|INFO|Releasing lport 754d5578-d995-4502-af66-b164dfdf1189 from this chassis (sb_readonly=0)
Oct 08 10:21:20 compute-0 ovn_controller[153187]: 2025-10-08T10:21:20Z|00080|binding|INFO|Setting lport 754d5578-d995-4502-af66-b164dfdf1189 down in Southbound
Oct 08 10:21:20 compute-0 ovn_controller[153187]: 2025-10-08T10:21:20Z|00081|binding|INFO|Removing iface tap754d5578-d9 ovn-installed in OVS
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.581 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:e9:f4 10.100.0.6'], port_security=['fa:16:3e:5d:e9:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20ffb86b-b5ba-4818-82e4-14a755c48807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84428682-9eff-4658-a105-8c0d1de9c87f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16d57876-2c07-4569-9200-1b8e93dece9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9ad9bb7-7c7b-464c-bbd0-86ab756be37d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=754d5578-d995-4502-af66-b164dfdf1189) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.590 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 754d5578-d995-4502-af66-b164dfdf1189 in datapath 84428682-9eff-4658-a105-8c0d1de9c87f unbound from our chassis
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.592 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84428682-9eff-4658-a105-8c0d1de9c87f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.593 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[dc83fafa-39ce-4111-9187-1cf4ff2f1949]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.595 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f namespace which is not needed anymore
Oct 08 10:21:20 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 08 10:21:20 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000d.scope: Consumed 12.962s CPU time.
Oct 08 10:21:20 compute-0 systemd-machined[216030]: Machine qemu-4-instance-0000000d terminated.
Oct 08 10:21:20 compute-0 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [NOTICE]   (283199) : haproxy version is 2.8.14-c23fe91
Oct 08 10:21:20 compute-0 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [NOTICE]   (283199) : path to executable is /usr/sbin/haproxy
Oct 08 10:21:20 compute-0 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [WARNING]  (283199) : Exiting Master process...
Oct 08 10:21:20 compute-0 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [ALERT]    (283199) : Current worker (283201) exited with code 143 (Terminated)
Oct 08 10:21:20 compute-0 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [WARNING]  (283199) : All workers exited. Exiting... (0)
Oct 08 10:21:20 compute-0 systemd[1]: libpod-a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8.scope: Deactivated successfully.
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.727 2 INFO nova.virt.libvirt.driver [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance destroyed successfully.
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.728 2 DEBUG nova.objects.instance [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'resources' on Instance uuid 20ffb86b-b5ba-4818-82e4-14a755c48807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 08 10:21:20 compute-0 podman[284028]: 2025-10-08 10:21:20.729860975 +0000 UTC m=+0.052523067 container died a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 08 10:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8-userdata-shm.mount: Deactivated successfully.
Oct 08 10:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9305ceef478b92232a6159096bf3391d562b676524b5a981565d66364f354e43-merged.mount: Deactivated successfully.
Oct 08 10:21:20 compute-0 podman[284028]: 2025-10-08 10:21:20.772418879 +0000 UTC m=+0.095080971 container cleanup a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:21:20 compute-0 systemd[1]: libpod-conmon-a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8.scope: Deactivated successfully.
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.795 2 DEBUG nova.virt.libvirt.vif [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:20:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-451384508',display_name='tempest-TestNetworkBasicOps-server-451384508',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-451384508',id=13,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzAAbR+LebFHZ4MQpbXVINvQrQE4iZi3jhjlRa4bUuBuh7BAgqwE3gXNZho6NGF97w7AAO52PK7tmiXY23liBZwBI0PDfy6ztl7vXddFfJ7MBnkOiMny5dlb5dxWiMeog==',key_name='tempest-TestNetworkBasicOps-1706390229',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:20:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-mat2tuft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:20:57Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=20ffb86b-b5ba-4818-82e4-14a755c48807,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.796 2 DEBUG nova.network.os_vif_util [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.797 2 DEBUG nova.network.os_vif_util [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.797 2 DEBUG os_vif [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.799 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap754d5578-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.841 2 INFO os_vif [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9')
Oct 08 10:21:20 compute-0 podman[284067]: 2025-10-08 10:21:20.871723536 +0000 UTC m=+0.076696527 container remove a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.878 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[2985efaa-3322-4f6e-927f-4b9206a4ac1f]: (4, ('Wed Oct  8 10:21:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f (a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8)\na50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8\nWed Oct  8 10:21:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f (a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8)\na50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.880 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[949a224b-f569-4a98-a0ce-46811f5c817a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.881 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84428682-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:20 compute-0 kernel: tap84428682-90: left promiscuous mode
Oct 08 10:21:20 compute-0 nova_compute[262220]: 2025-10-08 10:21:20.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.907 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[823d980a-a297-4a29-9408-ebcc6a29f35a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:21:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 108 KiB/s wr, 22 op/s
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.941 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[015ff961-673d-4d62-8ca2-a67a867775da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.943 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7a51f7-6166-4e22-b84f-94c0f983a8e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.971 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[8485fdde-aec7-4c74-8c40-45d6d8d4bdf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481501, 'reachable_time': 38049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284100, 'error': None, 'target': 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:21:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d84428682\x2d9eff\x2d4658\x2da105\x2d8c0d1de9c87f.mount: Deactivated successfully.
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.979 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 08 10:21:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.979 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[29c487aa-7264-43c5-aec3-f38386bd890a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 08 10:21:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:21.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2018419146' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:21:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2018419146' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.340 2 INFO nova.virt.libvirt.driver [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Deleting instance files /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807_del
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.341 2 INFO nova.virt.libvirt.driver [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Deletion of /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807_del complete
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.514 2 INFO nova.compute.manager [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Took 1.02 seconds to destroy the instance on the hypervisor.
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.515 2 DEBUG oslo.service.loopingcall [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.515 2 DEBUG nova.compute.manager [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.516 2 DEBUG nova.network.neutron [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.675 2 DEBUG nova.network.neutron [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updated VIF entry in instance network info cache for port 754d5578-d995-4502-af66-b164dfdf1189. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.676 2 DEBUG nova.network.neutron [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.708 2 DEBUG oslo_concurrency.lockutils [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.964 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-unplugged-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.965 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.965 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.966 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.966 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] No waiting events found dispatching network-vif-unplugged-754d5578-d995-4502-af66-b164dfdf1189 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.967 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-unplugged-754d5578-d995-4502-af66-b164dfdf1189 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.967 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.968 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.968 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.969 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.969 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] No waiting events found dispatching network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 08 10:21:21 compute-0 nova_compute[262220]: 2025-10-08 10:21:21.969 2 WARNING nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received unexpected event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 for instance with vm_state active and task_state deleting.
Oct 08 10:21:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:22.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:22 compute-0 ceph-mon[73572]: pgmap v1103: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 108 KiB/s wr, 22 op/s
Oct 08 10:21:22 compute-0 nova_compute[262220]: 2025-10-08 10:21:22.397 2 DEBUG nova.network.neutron [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:21:22 compute-0 nova_compute[262220]: 2025-10-08 10:21:22.414 2 DEBUG nova.compute.manager [req-07b4d789-46eb-412f-b2de-6bfe7c2cf29c req-93df61ff-6998-4b81-a0ff-b19e38ed2d1a 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-deleted-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 08 10:21:22 compute-0 nova_compute[262220]: 2025-10-08 10:21:22.414 2 INFO nova.compute.manager [req-07b4d789-46eb-412f-b2de-6bfe7c2cf29c req-93df61ff-6998-4b81-a0ff-b19e38ed2d1a 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Neutron deleted interface 754d5578-d995-4502-af66-b164dfdf1189; detaching it from the instance and deleting it from the info cache
Oct 08 10:21:22 compute-0 nova_compute[262220]: 2025-10-08 10:21:22.415 2 DEBUG nova.network.neutron [req-07b4d789-46eb-412f-b2de-6bfe7c2cf29c req-93df61ff-6998-4b81-a0ff-b19e38ed2d1a 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 08 10:21:22 compute-0 nova_compute[262220]: 2025-10-08 10:21:22.424 2 INFO nova.compute.manager [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Took 0.91 seconds to deallocate network for instance.
Oct 08 10:21:22 compute-0 nova_compute[262220]: 2025-10-08 10:21:22.436 2 DEBUG nova.compute.manager [req-07b4d789-46eb-412f-b2de-6bfe7c2cf29c req-93df61ff-6998-4b81-a0ff-b19e38ed2d1a 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Detach interface failed, port_id=754d5578-d995-4502-af66-b164dfdf1189, reason: Instance 20ffb86b-b5ba-4818-82e4-14a755c48807 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 08 10:21:22 compute-0 nova_compute[262220]: 2025-10-08 10:21:22.507 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:21:22 compute-0 nova_compute[262220]: 2025-10-08 10:21:22.508 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:21:22 compute-0 nova_compute[262220]: 2025-10-08 10:21:22.573 2 DEBUG oslo_concurrency.processutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:21:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 108 KiB/s wr, 22 op/s
Oct 08 10:21:23 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:21:23 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926904480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:23 compute-0 nova_compute[262220]: 2025-10-08 10:21:23.074 2 DEBUG oslo_concurrency.processutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:21:23 compute-0 nova_compute[262220]: 2025-10-08 10:21:23.080 2 DEBUG nova.compute.provider_tree [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:21:23 compute-0 nova_compute[262220]: 2025-10-08 10:21:23.113 2 DEBUG nova.scheduler.client.report [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:21:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:23.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:23 compute-0 nova_compute[262220]: 2025-10-08 10:21:23.246 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:21:23 compute-0 nova_compute[262220]: 2025-10-08 10:21:23.323 2 INFO nova.scheduler.client.report [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Deleted allocations for instance 20ffb86b-b5ba-4818-82e4-14a755c48807
Oct 08 10:21:23 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2926904480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:23 compute-0 nova_compute[262220]: 2025-10-08 10:21:23.430 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:21:23 compute-0 nova_compute[262220]: 2025-10-08 10:21:23.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:23 compute-0 podman[284127]: 2025-10-08 10:21:23.947521546 +0000 UTC m=+0.103944928 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 08 10:21:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:24.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:24 compute-0 ceph-mon[73572]: pgmap v1104: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 108 KiB/s wr, 22 op/s
Oct 08 10:21:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.351462) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884351535, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2126, "num_deletes": 251, "total_data_size": 4146520, "memory_usage": 4215872, "flush_reason": "Manual Compaction"}
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884373212, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4005822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29538, "largest_seqno": 31663, "table_properties": {"data_size": 3996283, "index_size": 5969, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19940, "raw_average_key_size": 20, "raw_value_size": 3977210, "raw_average_value_size": 4083, "num_data_blocks": 257, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918684, "oldest_key_time": 1759918684, "file_creation_time": 1759918884, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 21804 microseconds, and 8534 cpu microseconds.
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.373268) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4005822 bytes OK
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.373295) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.374867) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.374882) EVENT_LOG_v1 {"time_micros": 1759918884374877, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.374901) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4137891, prev total WAL file size 4137891, number of live WAL files 2.
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.376086) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3911KB)], [65(11MB)]
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884376145, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16135283, "oldest_snapshot_seqno": -1}
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6209 keys, 13992151 bytes, temperature: kUnknown
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884457766, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 13992151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13951366, "index_size": 24163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 159114, "raw_average_key_size": 25, "raw_value_size": 13840335, "raw_average_value_size": 2229, "num_data_blocks": 969, "num_entries": 6209, "num_filter_entries": 6209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918884, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.458242) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13992151 bytes
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.459685) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.4 rd, 171.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.6 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 6730, records dropped: 521 output_compression: NoCompression
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.459726) EVENT_LOG_v1 {"time_micros": 1759918884459708, "job": 36, "event": "compaction_finished", "compaction_time_micros": 81738, "compaction_time_cpu_micros": 37283, "output_level": 6, "num_output_files": 1, "total_output_size": 13992151, "num_input_records": 6730, "num_output_records": 6209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884460869, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884463907, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.375952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:21:24 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:21:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 111 KiB/s wr, 51 op/s
Oct 08 10:21:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:25.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:25] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:21:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:25] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 08 10:21:25 compute-0 nova_compute[262220]: 2025-10-08 10:21:25.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:26.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:26 compute-0 ceph-mon[73572]: pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 111 KiB/s wr, 51 op/s
Oct 08 10:21:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 29 op/s
Oct 08 10:21:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:27.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:27.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:28.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:28 compute-0 ceph-mon[73572]: pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 29 op/s
Oct 08 10:21:28 compute-0 nova_compute[262220]: 2025-10-08 10:21:28.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:28 compute-0 nova_compute[262220]: 2025-10-08 10:21:28.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:28 compute-0 nova_compute[262220]: 2025-10-08 10:21:28.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:28.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:28 compute-0 podman[284159]: 2025-10-08 10:21:28.916975794 +0000 UTC m=+0.080689056 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 10:21:28 compute-0 podman[284160]: 2025-10-08 10:21:28.930528302 +0000 UTC m=+0.081072079 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 10:21:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Oct 08 10:21:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:29.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:29 compute-0 sudo[284200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:21:29 compute-0 sudo[284200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:29 compute-0 sudo[284200]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:30.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:30 compute-0 ceph-mon[73572]: pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Oct 08 10:21:30 compute-0 nova_compute[262220]: 2025-10-08 10:21:30.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 08 10:21:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:31.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:32.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:32 compute-0 ceph-mon[73572]: pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 08 10:21:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:21:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:21:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 08 10:21:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:21:33 compute-0 nova_compute[262220]: 2025-10-08 10:21:33.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:34 compute-0 ceph-mon[73572]: pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 08 10:21:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 08 10:21:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:35.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:35 compute-0 nova_compute[262220]: 2025-10-08 10:21:35.724 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759918880.724088, 20ffb86b-b5ba-4818-82e4-14a755c48807 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 08 10:21:35 compute-0 nova_compute[262220]: 2025-10-08 10:21:35.725 2 INFO nova.compute.manager [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] VM Stopped (Lifecycle Event)
Oct 08 10:21:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:35] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct 08 10:21:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:35] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct 08 10:21:35 compute-0 nova_compute[262220]: 2025-10-08 10:21:35.746 2 DEBUG nova.compute.manager [None req-a5e69c99-7c2a-4b47-a998-55e8a3203fa1 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 08 10:21:35 compute-0 nova_compute[262220]: 2025-10-08 10:21:35.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:36.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:36 compute-0 ceph-mon[73572]: pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 08 10:21:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:37.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:37.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:37 compute-0 ceph-mon[73572]: pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:38.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:38 compute-0 nova_compute[262220]: 2025-10-08 10:21:38.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:38.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:21:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:39.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:40 compute-0 ceph-mon[73572]: pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:21:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:40.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:40 compute-0 nova_compute[262220]: 2025-10-08 10:21:40.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:41.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:42 compute-0 ceph-mon[73572]: pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:42.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:43.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:43 compute-0 nova_compute[262220]: 2025-10-08 10:21:43.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:43 compute-0 podman[284239]: 2025-10-08 10:21:43.889389723 +0000 UTC m=+0.052680922 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct 08 10:21:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:44.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:44 compute-0 ceph-mon[73572]: pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:21:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:45] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct 08 10:21:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:45] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct 08 10:21:45 compute-0 nova_compute[262220]: 2025-10-08 10:21:45.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:46.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:46 compute-0 ceph-mon[73572]: pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:21:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:47.201Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:21:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:47.201Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:21:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:47.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:21:47
Oct 08 10:21:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:21:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:21:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.log', 'images', '.nfs', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control']
Oct 08 10:21:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:21:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:21:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:21:47 compute-0 nova_compute[262220]: 2025-10-08 10:21:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:21:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:21:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:21:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:48.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:21:48 compute-0 ceph-mon[73572]: pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:21:48 compute-0 nova_compute[262220]: 2025-10-08 10:21:48.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:48.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:21:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:48.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:48 compute-0 nova_compute[262220]: 2025-10-08 10:21:48.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:21:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:21:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:49 compute-0 sudo[284266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:21:49 compute-0 sudo[284266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:21:49 compute-0 sudo[284266]: pam_unix(sudo:session): session closed for user root
Oct 08 10:21:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:50.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:50 compute-0 ceph-mon[73572]: pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.900 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.901 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.901 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.926 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.926 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.926 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:21:50 compute-0 nova_compute[262220]: 2025-10-08 10:21:50.927 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:21:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:21:51 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1213713613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:51 compute-0 nova_compute[262220]: 2025-10-08 10:21:51.441 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:21:51 compute-0 nova_compute[262220]: 2025-10-08 10:21:51.626 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:21:51 compute-0 nova_compute[262220]: 2025-10-08 10:21:51.627 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4562MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:21:51 compute-0 nova_compute[262220]: 2025-10-08 10:21:51.628 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:21:51 compute-0 nova_compute[262220]: 2025-10-08 10:21:51.628 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:21:51 compute-0 nova_compute[262220]: 2025-10-08 10:21:51.684 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:21:51 compute-0 nova_compute[262220]: 2025-10-08 10:21:51.684 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:21:51 compute-0 nova_compute[262220]: 2025-10-08 10:21:51.704 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:21:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:21:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981282094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:52 compute-0 nova_compute[262220]: 2025-10-08 10:21:52.175 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:21:52 compute-0 nova_compute[262220]: 2025-10-08 10:21:52.182 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:21:52 compute-0 nova_compute[262220]: 2025-10-08 10:21:52.219 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:21:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:52.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:52 compute-0 nova_compute[262220]: 2025-10-08 10:21:52.257 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:21:52 compute-0 nova_compute[262220]: 2025-10-08 10:21:52.257 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:21:52 compute-0 ceph-mon[73572]: pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1213713613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1981282094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:53.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:53 compute-0 nova_compute[262220]: 2025-10-08 10:21:53.253 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:21:53 compute-0 nova_compute[262220]: 2025-10-08 10:21:53.254 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:21:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1277217659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:53 compute-0 nova_compute[262220]: 2025-10-08 10:21:53.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:53 compute-0 nova_compute[262220]: 2025-10-08 10:21:53.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:21:53 compute-0 nova_compute[262220]: 2025-10-08 10:21:53.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:21:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:21:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:54.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:21:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:54 compute-0 ceph-mon[73572]: pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3029593801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2125170172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:21:54 compute-0 podman[284342]: 2025-10-08 10:21:54.974164289 +0000 UTC m=+0.129308836 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:21:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:55.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1638101476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:21:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:55] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 08 10:21:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:55] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 08 10:21:55 compute-0 nova_compute[262220]: 2025-10-08 10:21:55.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:56.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:56 compute-0 ceph-mon[73572]: pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:21:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:57.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:57.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:57.420 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:21:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:57.421 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:21:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:21:57.421 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:21:57 compute-0 nova_compute[262220]: 2025-10-08 10:21:57.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:21:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:21:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:21:58 compute-0 ceph-mon[73572]: pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:21:58 compute-0 nova_compute[262220]: 2025-10-08 10:21:58.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:21:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:58.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:21:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:21:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:21:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:21:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:21:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:21:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:21:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:21:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:59.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:21:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:21:59 compute-0 ovn_controller[153187]: 2025-10-08T10:21:59Z|00082|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 08 10:21:59 compute-0 podman[284374]: 2025-10-08 10:21:59.897923692 +0000 UTC m=+0.055281736 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct 08 10:21:59 compute-0 podman[284373]: 2025-10-08 10:21:59.915891722 +0000 UTC m=+0.077009787 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 08 10:22:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:00.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:00 compute-0 ceph-mon[73572]: pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:00 compute-0 nova_compute[262220]: 2025-10-08 10:22:00.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:22:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:01.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:22:01 compute-0 ceph-mon[73572]: pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:02.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:22:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:22:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:22:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:03.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:03 compute-0 nova_compute[262220]: 2025-10-08 10:22:03.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:03 compute-0 ceph-mon[73572]: pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:04.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:22:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:22:05 compute-0 nova_compute[262220]: 2025-10-08 10:22:05.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:06 compute-0 ceph-mon[73572]: pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:06.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:07.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:08 compute-0 ceph-mon[73572]: pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:08 compute-0 sudo[284420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:22:08 compute-0 sudo[284420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:08 compute-0 sudo[284420]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:08 compute-0 sudo[284445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 08 10:22:08 compute-0 sudo[284445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:22:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:08.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:22:08 compute-0 nova_compute[262220]: 2025-10-08 10:22:08.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:08.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:08 compute-0 podman[284544]: 2025-10-08 10:22:08.896237155 +0000 UTC m=+0.085581725 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 10:22:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:09 compute-0 podman[284544]: 2025-10-08 10:22:09.023521875 +0000 UTC m=+0.212866475 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 10:22:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:09.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:09 compute-0 sudo[284694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:22:09 compute-0 sudo[284694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:09 compute-0 sudo[284694]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:09 compute-0 podman[284682]: 2025-10-08 10:22:09.883007638 +0000 UTC m=+0.183320260 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:22:10 compute-0 podman[284682]: 2025-10-08 10:22:10.066889406 +0000 UTC m=+0.367201918 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:22:10 compute-0 ceph-mon[73572]: pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:10.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:10 compute-0 podman[284781]: 2025-10-08 10:22:10.660621698 +0000 UTC m=+0.120803702 container exec 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 10:22:10 compute-0 podman[284801]: 2025-10-08 10:22:10.748248408 +0000 UTC m=+0.060477684 container exec_died 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 08 10:22:10 compute-0 podman[284781]: 2025-10-08 10:22:10.775609631 +0000 UTC m=+0.235791645 container exec_died 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:22:10 compute-0 nova_compute[262220]: 2025-10-08 10:22:10.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:11 compute-0 podman[284845]: 2025-10-08 10:22:11.048022008 +0000 UTC m=+0.059707590 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 10:22:11 compute-0 podman[284845]: 2025-10-08 10:22:11.060501161 +0000 UTC m=+0.072186733 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct 08 10:22:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:11 compute-0 podman[284915]: 2025-10-08 10:22:11.296287514 +0000 UTC m=+0.056762503 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct 08 10:22:11 compute-0 podman[284915]: 2025-10-08 10:22:11.308410486 +0000 UTC m=+0.068885445 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, distribution-scope=public, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 08 10:22:11 compute-0 podman[284982]: 2025-10-08 10:22:11.55565834 +0000 UTC m=+0.071397947 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:22:11 compute-0 podman[284982]: 2025-10-08 10:22:11.600457847 +0000 UTC m=+0.116197414 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:22:11 compute-0 podman[285059]: 2025-10-08 10:22:11.851537594 +0000 UTC m=+0.056888918 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 10:22:12 compute-0 podman[285059]: 2025-10-08 10:22:12.090921703 +0000 UTC m=+0.296273017 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 08 10:22:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:12.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:12 compute-0 ceph-mon[73572]: pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:12 compute-0 podman[285171]: 2025-10-08 10:22:12.620786613 +0000 UTC m=+0.075179238 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:22:12 compute-0 podman[285171]: 2025-10-08 10:22:12.67734604 +0000 UTC m=+0.131738655 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 08 10:22:12 compute-0 sudo[284445]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:22:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:22:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:12 compute-0 sudo[285213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:22:12 compute-0 sudo[285213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:12 compute-0 sudo[285213]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:12 compute-0 sudo[285238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:22:12 compute-0 sudo[285238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:13 compute-0 sudo[285238]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:22:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:22:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:22:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:22:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:22:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:22:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:22:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:22:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:22:13 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:22:13 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:22:13 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:22:13 compute-0 sudo[285295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:22:13 compute-0 sudo[285295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:13 compute-0 sudo[285295]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:13 compute-0 nova_compute[262220]: 2025-10-08 10:22:13.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:13 compute-0 sudo[285320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:22:13 compute-0 sudo[285320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:13 compute-0 ceph-mon[73572]: pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:22:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:22:13 compute-0 ceph-mon[73572]: pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:22:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:22:13 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:22:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:14 compute-0 podman[285388]: 2025-10-08 10:22:14.082676919 +0000 UTC m=+0.044578520 container create c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:22:14 compute-0 systemd[1]: Started libpod-conmon-c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc.scope.
Oct 08 10:22:14 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:22:14 compute-0 podman[285388]: 2025-10-08 10:22:14.062178577 +0000 UTC m=+0.024080278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:22:14 compute-0 podman[285388]: 2025-10-08 10:22:14.16572296 +0000 UTC m=+0.127624591 container init c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Oct 08 10:22:14 compute-0 podman[285388]: 2025-10-08 10:22:14.172874272 +0000 UTC m=+0.134775873 container start c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 10:22:14 compute-0 podman[285388]: 2025-10-08 10:22:14.176128657 +0000 UTC m=+0.138030308 container attach c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:22:14 compute-0 condescending_wilbur[285405]: 167 167
Oct 08 10:22:14 compute-0 systemd[1]: libpod-c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc.scope: Deactivated successfully.
Oct 08 10:22:14 compute-0 podman[285388]: 2025-10-08 10:22:14.179174606 +0000 UTC m=+0.141076227 container died c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 08 10:22:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-79ebbc1869eb2ac9bde5ee4bf799a0a9762b51a18f26b4932d4622cee9c70d41-merged.mount: Deactivated successfully.
Oct 08 10:22:14 compute-0 podman[285388]: 2025-10-08 10:22:14.227446854 +0000 UTC m=+0.189348475 container remove c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:22:14 compute-0 systemd[1]: libpod-conmon-c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc.scope: Deactivated successfully.
Oct 08 10:22:14 compute-0 podman[285404]: 2025-10-08 10:22:14.23939193 +0000 UTC m=+0.100706913 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
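The iscsid health_status event above embeds the container's full config_data as a Python dict literal (single-quoted strings, bare True), not JSON, so json.loads() will reject it. A minimal sketch, assuming the config_data=... span has already been sliced out of the captured journal line (the sample string below is a truncated stand-in), that recovers the dict with ast.literal_eval:

    import ast

    # Truncated stand-in for the config_data=... span of the journal line above.
    raw = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
           "'net': 'host', 'privileged': True, 'restart': 'always'}")
    config = ast.literal_eval(raw)  # parses literals only; never executes code
    print(config['privileged'])                             # True
    print(config['environment']['KOLLA_CONFIG_STRATEGY'])   # COPY_ALWAYS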
Oct 08 10:22:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:22:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:14.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
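radosgw's beast frontend writes one access line per request in the shape shown above. A minimal sketch, assuming lines of exactly that shape, that pulls out the client address, HTTP status, and latency with a regex:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous '
            '[08/Oct/2025:10:22:14.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000032s')
    m = BEAST.search(line)
    print(m['client'], m['status'], float(m['latency']))  # 192.168.122.102 200 0.001000032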
Oct 08 10:22:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:14 compute-0 podman[285447]: 2025-10-08 10:22:14.432755714 +0000 UTC m=+0.054312326 container create 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 10:22:14 compute-0 systemd[1]: Started libpod-conmon-721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c.scope.
Oct 08 10:22:14 compute-0 podman[285447]: 2025-10-08 10:22:14.411155476 +0000 UTC m=+0.032712178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:22:14 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:14 compute-0 podman[285447]: 2025-10-08 10:22:14.55680698 +0000 UTC m=+0.178363652 container init 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:22:14 compute-0 podman[285447]: 2025-10-08 10:22:14.566402129 +0000 UTC m=+0.187958761 container start 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:22:14 compute-0 podman[285447]: 2025-10-08 10:22:14.5704641 +0000 UTC m=+0.192020722 container attach 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 08 10:22:14 compute-0 quizzical_hypatia[285463]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:22:14 compute-0 quizzical_hypatia[285463]: --> All data devices are unavailable
Oct 08 10:22:14 compute-0 systemd[1]: libpod-721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c.scope: Deactivated successfully.
Oct 08 10:22:14 compute-0 podman[285447]: 2025-10-08 10:22:14.973300628 +0000 UTC m=+0.594857260 container died 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:22:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b-merged.mount: Deactivated successfully.
Oct 08 10:22:15 compute-0 podman[285447]: 2025-10-08 10:22:15.012515255 +0000 UTC m=+0.634071877 container remove 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 08 10:22:15 compute-0 systemd[1]: libpod-conmon-721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c.scope: Deactivated successfully.
Oct 08 10:22:15 compute-0 sudo[285320]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:15 compute-0 sudo[285490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:22:15 compute-0 sudo[285490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:15 compute-0 sudo[285490]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:15 compute-0 sudo[285515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:22:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:15.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:15 compute-0 sudo[285515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:15 compute-0 podman[285583]: 2025-10-08 10:22:15.732817704 +0000 UTC m=+0.042797143 container create a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 08 10:22:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:22:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:22:15 compute-0 systemd[1]: Started libpod-conmon-a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d.scope.
Oct 08 10:22:15 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:22:15 compute-0 podman[285583]: 2025-10-08 10:22:15.805399398 +0000 UTC m=+0.115378927 container init a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 10:22:15 compute-0 podman[285583]: 2025-10-08 10:22:15.71814231 +0000 UTC m=+0.028121769 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:22:15 compute-0 podman[285583]: 2025-10-08 10:22:15.822885531 +0000 UTC m=+0.132864970 container start a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:22:15 compute-0 podman[285583]: 2025-10-08 10:22:15.826375214 +0000 UTC m=+0.136354653 container attach a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:22:15 compute-0 optimistic_maxwell[285599]: 167 167
Oct 08 10:22:15 compute-0 systemd[1]: libpod-a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d.scope: Deactivated successfully.
Oct 08 10:22:15 compute-0 podman[285583]: 2025-10-08 10:22:15.830701345 +0000 UTC m=+0.140680774 container died a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 08 10:22:15 compute-0 nova_compute[262220]: 2025-10-08 10:22:15.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a41d9a9408d9c17a89805be3cf7463349bc94fb893768610c4016c2d600d56f5-merged.mount: Deactivated successfully.
Oct 08 10:22:15 compute-0 podman[285583]: 2025-10-08 10:22:15.871494421 +0000 UTC m=+0.181473860 container remove a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:22:15 compute-0 systemd[1]: libpod-conmon-a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d.scope: Deactivated successfully.
Oct 08 10:22:16 compute-0 podman[285623]: 2025-10-08 10:22:16.076659147 +0000 UTC m=+0.046018078 container create dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:22:16 compute-0 systemd[1]: Started libpod-conmon-dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a.scope.
Oct 08 10:22:16 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:22:16 compute-0 podman[285623]: 2025-10-08 10:22:16.057373404 +0000 UTC m=+0.026732355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:16 compute-0 podman[285623]: 2025-10-08 10:22:16.169089521 +0000 UTC m=+0.138448492 container init dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 08 10:22:16 compute-0 podman[285623]: 2025-10-08 10:22:16.179331422 +0000 UTC m=+0.148690373 container start dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 10:22:16 compute-0 podman[285623]: 2025-10-08 10:22:16.182583647 +0000 UTC m=+0.151942578 container attach dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:22:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:16.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]: {
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:     "1": [
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:         {
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "devices": [
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "/dev/loop3"
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             ],
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "lv_name": "ceph_lv0",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "lv_size": "21470642176",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "name": "ceph_lv0",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "tags": {
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.cluster_name": "ceph",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.crush_device_class": "",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.encrypted": "0",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.osd_id": "1",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.type": "block",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.vdo": "0",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:                 "ceph.with_tpm": "0"
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             },
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "type": "block",
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:             "vg_name": "ceph_vg0"
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:         }
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]:     ]
Oct 08 10:22:16 compute-0 friendly_engelbart[285639]: }
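The JSON that friendly_engelbart printed above is the ceph-volume lvm list payload: a map from OSD id to a list of LV records, with sizes serialized as strings. A minimal sketch, assuming the multi-line output has been reassembled into lvm_list.json (a hypothetical file name), that summarizes each OSD:

    import json

    with open('lvm_list.json') as fh:          # hypothetical reassembled payload
        payload = json.load(fh)

    for osd_id, devices in payload.items():
        for dev in devices:
            tags = dev['tags']
            print(f"osd.{osd_id}: {dev['lv_path']} on {','.join(dev['devices'])} "
                  f"({int(dev['lv_size']) / 2**30:.1f} GiB, "
                  f"osd_fsid={tags['ceph.osd_fsid']}, encrypted={tags['ceph.encrypted']})")

For the record above this prints a single line for osd.1 on /dev/loop3 at roughly 20.0 GiB.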
Oct 08 10:22:16 compute-0 systemd[1]: libpod-dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a.scope: Deactivated successfully.
Oct 08 10:22:16 compute-0 podman[285623]: 2025-10-08 10:22:16.493820817 +0000 UTC m=+0.463179748 container died dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 10:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe-merged.mount: Deactivated successfully.
Oct 08 10:22:16 compute-0 podman[285623]: 2025-10-08 10:22:16.533159037 +0000 UTC m=+0.502517968 container remove dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 08 10:22:16 compute-0 systemd[1]: libpod-conmon-dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a.scope: Deactivated successfully.
Oct 08 10:22:16 compute-0 ceph-mon[73572]: pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:16 compute-0 sudo[285515]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:16 compute-0 sudo[285660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:22:16 compute-0 sudo[285660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:16 compute-0 sudo[285660]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:16 compute-0 sudo[285685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:22:16 compute-0 sudo[285685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:17 compute-0 podman[285751]: 2025-10-08 10:22:17.131816148 +0000 UTC m=+0.045286163 container create 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:22:17 compute-0 systemd[1]: Started libpod-conmon-95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a.scope.
Oct 08 10:22:17 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:22:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:17.203Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:22:17 compute-0 podman[285751]: 2025-10-08 10:22:17.112474734 +0000 UTC m=+0.025944769 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:22:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:17.203Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:22:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:17.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:22:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:17.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:17 compute-0 podman[285751]: 2025-10-08 10:22:17.228940084 +0000 UTC m=+0.142410189 container init 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:22:17 compute-0 podman[285751]: 2025-10-08 10:22:17.23655047 +0000 UTC m=+0.150020475 container start 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:22:17 compute-0 podman[285751]: 2025-10-08 10:22:17.240383744 +0000 UTC m=+0.153853849 container attach 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 08 10:22:17 compute-0 zen_kare[285767]: 167 167
Oct 08 10:22:17 compute-0 systemd[1]: libpod-95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a.scope: Deactivated successfully.
Oct 08 10:22:17 compute-0 podman[285751]: 2025-10-08 10:22:17.243802695 +0000 UTC m=+0.157272740 container died 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 08 10:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3af8dc3807c1fe1dcb5b9069f803bfc480d8c3a11bcda3e63d4174f57cd1a1bb-merged.mount: Deactivated successfully.
Oct 08 10:22:17 compute-0 podman[285751]: 2025-10-08 10:22:17.280262512 +0000 UTC m=+0.193732517 container remove 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 08 10:22:17 compute-0 systemd[1]: libpod-conmon-95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a.scope: Deactivated successfully.
Oct 08 10:22:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:17 compute-0 podman[285792]: 2025-10-08 10:22:17.514402422 +0000 UTC m=+0.068967148 container create ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 10:22:17 compute-0 podman[285792]: 2025-10-08 10:22:17.482257615 +0000 UTC m=+0.036822431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:22:17 compute-0 systemd[1]: Started libpod-conmon-ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca.scope.
Oct 08 10:22:17 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:22:17 compute-0 podman[285792]: 2025-10-08 10:22:17.628832097 +0000 UTC m=+0.183396803 container init ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 10:22:17 compute-0 podman[285792]: 2025-10-08 10:22:17.636863346 +0000 UTC m=+0.191428062 container start ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:22:17 compute-0 podman[285792]: 2025-10-08 10:22:17.64069 +0000 UTC m=+0.195254726 container attach ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:22:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:22:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:22:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:22:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:22:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:22:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:22:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:22:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:22:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:18.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:18 compute-0 lvm[285884]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:22:18 compute-0 lvm[285884]: VG ceph_vg0 finished
Oct 08 10:22:18 compute-0 compassionate_babbage[285809]: {}
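Here ceph-volume raw list prints an empty object even though the lvm list run above reported osd.1; cephadm issues both listings back to back (the two sudo COMMAND lines above). A minimal sketch, assuming both JSON payloads are at hand as strings, that flags OSD ids present in only one inventory:

    import json

    lvm = json.loads('{"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0"}]}')  # trimmed sample from the run above
    raw = json.loads('{}')                                              # the raw list output in this run
    print('in lvm list only:', sorted(set(lvm) - set(raw)))             # ['1']
    print('in raw list only:', sorted(set(raw) - set(lvm)))             # []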
Oct 08 10:22:18 compute-0 systemd[1]: libpod-ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca.scope: Deactivated successfully.
Oct 08 10:22:18 compute-0 systemd[1]: libpod-ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca.scope: Consumed 1.283s CPU time.
Oct 08 10:22:18 compute-0 podman[285887]: 2025-10-08 10:22:18.477431929 +0000 UTC m=+0.026966571 container died ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 08 10:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4-merged.mount: Deactivated successfully.
Oct 08 10:22:18 compute-0 podman[285887]: 2025-10-08 10:22:18.52732542 +0000 UTC m=+0.076860072 container remove ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:22:18 compute-0 systemd[1]: libpod-conmon-ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca.scope: Deactivated successfully.
Oct 08 10:22:18 compute-0 ceph-mon[73572]: pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:22:18 compute-0 sudo[285685]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:22:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:18 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:22:18 compute-0 nova_compute[262220]: 2025-10-08 10:22:18.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:18 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:18 compute-0 sshd-session[285902]: Accepted publickey for zuul from 192.168.122.10 port 53306 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 10:22:18 compute-0 systemd-logind[798]: New session 58 of user zuul.
Oct 08 10:22:18 compute-0 systemd[1]: Started Session 58 of User zuul.
Oct 08 10:22:18 compute-0 sshd-session[285902]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 10:22:18 compute-0 sudo[285905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:22:18 compute-0 sudo[285905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:18 compute-0 sudo[285905]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:18 compute-0 sudo[285931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 08 10:22:18 compute-0 sudo[285931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:22:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:22:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:22:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:18.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
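Both dashboard webhook receivers that Alertmanager is retrying above, compute-1 and compute-2 on port 8443, fail with i/o timeouts. A minimal sketch, assuming those hostnames resolve from where it runs, that checks raw TCP reachability of the receivers before digging further:

    import socket

    RECEIVERS = [('compute-1.ctlplane.example.com', 8443),
                 ('compute-2.ctlplane.example.com', 8443)]

    for host, port in RECEIVERS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f'{host}:{port} reachable')
        except OSError as exc:
            print(f'{host}:{port} unreachable: {exc}')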
Oct 08 10:22:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:19 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:19 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:22:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 08 10:22:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627942894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:22:20 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 08 10:22:20 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627942894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:22:20 compute-0 ceph-mon[73572]: pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3627942894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:22:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3627942894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:22:20 compute-0 nova_compute[262220]: 2025-10-08 10:22:20.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:21.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:21 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16275 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:21 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26111 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:21 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.25912 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:21 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16287 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:21 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26123 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:22 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.25921 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:22 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 08 10:22:22 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372707832' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:22:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:22.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:22 compute-0 ceph-mon[73572]: from='client.16275 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:22 compute-0 ceph-mon[73572]: from='client.26111 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:22 compute-0 ceph-mon[73572]: pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:22 compute-0 ceph-mon[73572]: from='client.25912 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:22 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2372707832' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:22:22 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2719208240' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:22:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:23.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:23 compute-0 nova_compute[262220]: 2025-10-08 10:22:23.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:23 compute-0 ceph-mon[73572]: from='client.16287 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:23 compute-0 ceph-mon[73572]: from='client.26123 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:23 compute-0 ceph-mon[73572]: from='client.25921 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:23 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/341625263' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:22:23 compute-0 ceph-mon[73572]: pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:22:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:24.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:25.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:22:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:22:25 compute-0 nova_compute[262220]: 2025-10-08 10:22:25.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:25 compute-0 podman[286268]: 2025-10-08 10:22:25.920563494 +0000 UTC m=+0.079067334 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 08 10:22:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:26 compute-0 ceph-mon[73572]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:27.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:27.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:27 compute-0 ovs-vsctl[286326]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 08 10:22:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:28.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:28 compute-0 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 08 10:22:28 compute-0 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 08 10:22:28 compute-0 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 08 10:22:28 compute-0 nova_compute[262220]: 2025-10-08 10:22:28.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:28 compute-0 ceph-mon[73572]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:28.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:22:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:28.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:29 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: cache status {prefix=cache status} (starting...)
Oct 08 10:22:29 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:29 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: client ls {prefix=client ls} (starting...)
Oct 08 10:22:29 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:29 compute-0 lvm[286684]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:22:29 compute-0 lvm[286684]: VG ceph_vg0 finished
Oct 08 10:22:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:29 compute-0 kernel: block sr0: the capability attribute has been deprecated.
Oct 08 10:22:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:29 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26138 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:29 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: damage ls {prefix=damage ls} (starting...)
Oct 08 10:22:29 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:29 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16308 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 08 10:22:29 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:22:29 compute-0 sudo[286816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:22:29 compute-0 sudo[286816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:29 compute-0 sudo[286816]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump loads {prefix=dump loads} (starting...)
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:30 compute-0 podman[286844]: 2025-10-08 10:22:30.02317628 +0000 UTC m=+0.075606933 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:22:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 08 10:22:30 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452932124' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:22:30 compute-0 podman[286845]: 2025-10-08 10:22:30.042704401 +0000 UTC m=+0.095218916 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 08 10:22:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26150 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16323 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:22:30 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219825122' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26165 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.25957 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:30 compute-0 ceph-mon[73572]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:30 compute-0 ceph-mon[73572]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3290453697' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3452932124' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3604106434' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1219825122' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16341 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 08 10:22:30 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 08 10:22:30 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093340126' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 08 10:22:30 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:30 compute-0 nova_compute[262220]: 2025-10-08 10:22:30.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.25978 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26183 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 08 10:22:31 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:31 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16356 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:31 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: ops {prefix=ops} (starting...)
Oct 08 10:22:31 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 08 10:22:31 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1177024407' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26201 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.26138 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.16308 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.26150 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.16323 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.26165 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.25957 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2338073995' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2389406622' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4093340126' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/67663719' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/829358862' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1177024407' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3804849288' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3840479706' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26216 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16377 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 08 10:22:31 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2799919786' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26008 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:31 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: session ls {prefix=session ls} (starting...)
Oct 08 10:22:31 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:22:32 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: status {prefix=status} (starting...)
Oct 08 10:22:32 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16392 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26237 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:32.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:32 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26264 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 08 10:22:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932372394' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 08 10:22:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 08 10:22:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683114755' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.16341 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.25978 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.26183 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.16356 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.26201 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.26216 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.16377 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2799919786' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.26008 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1087323164' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2660591270' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.16392 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.26237 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2832354623' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2244189999' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3197567280' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.26264 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/932372394' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3736912694' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1683114755' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:22:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:22:32 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26065 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 08 10:22:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703494652' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 08 10:22:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238833166' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:22:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:33 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16434 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:22:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:22:33.382+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:22:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 08 10:22:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26303 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:33 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:22:33.440+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:22:33 compute-0 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:22:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 08 10:22:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246441215' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:22:33 compute-0 nova_compute[262220]: 2025-10-08 10:22:33.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2996532542' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3914554974' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.26065 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1703494652' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/238833166' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3227893960' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3453398439' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2847823306' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.16434 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2977138949' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.26303 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:33 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3246441215' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 08 10:22:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3831085452' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 08 10:22:33 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 08 10:22:33 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629341640' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:34 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26122 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:22:34.267+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:22:34 compute-0 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:22:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 08 10:22:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192288068' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 08 10:22:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790329669' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26360 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3919531791' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3000238592' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3926470964' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3831085452' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1152860853' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1629341640' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4153704505' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/5113252' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.26122 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1192288068' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2790329669' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2778688367' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/915932176' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: from='client.26360 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16476 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 08 10:22:34 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1739770367' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:22:34 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26378 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16488 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 08 10:22:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410494144' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26393 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:00.076350+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 4907008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:01.076476+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 4907008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:02.076595+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 4898816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:03.076723+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989380 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 4898816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:04.076863+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 4898816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2c59ac00 session 0x559f2dc09680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5eeb40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:05.077005+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 4890624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:06.077366+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 4890624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:07.077511+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4874240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:08.077645+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989380 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4866048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:09.077802+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 4849664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:10.077937+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.693235397s of 19.816581726s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4841472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:11.078066+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4841472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:12.078190+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4833280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:13.078326+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989512 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4833280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:14.078435+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4833280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:15.078553+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4825088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:16.078695+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4825088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:17.078870+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 4816896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:18.079006+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989644 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 4816896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:19.079165+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 4808704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:20.079318+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 4808704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:21.079448+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.952174187s of 10.981819153s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 4800512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:22.079607+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 4800512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:23.079741+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990565 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 4800512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:24.079879+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 4792320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:25.080059+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 4792320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:26.080220+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 4784128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:27.080388+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 4775936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:28.080523+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990433 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 4767744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:29.080640+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 4767744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:30.080833+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 4767744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:31.081004+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 4759552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9400 session 0x559f2d5ef0e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2dbe85a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:32.081178+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 4759552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:33.081328+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990301 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 4751360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:34.081550+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 4743168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:35.081716+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 4734976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:36.081980+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 4734976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:37.082141+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 4734976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:38.082275+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990301 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 4726784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:39.082463+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 4726784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:40.082649+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 4718592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:41.082798+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 4718592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:42.082909+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.046033859s of 21.209218979s, submitted: 4
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 4702208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:43.083140+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990433 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 4702208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:44.083393+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 4702208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:45.083538+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 4694016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:46.083737+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 4694016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:47.083902+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 4694016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:48.084119+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991945 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 4685824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:49.084272+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 4685824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:50.084804+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 4677632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:51.085397+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 4677632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:52.085551+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 4669440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:53.086092+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991354 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 4669440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:54.086271+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 4653056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:55.086763+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 4653056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:56.086932+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 4653056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:57.087076+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 4644864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:58.087234+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991354 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 4644864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:50:59.087373+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 4636672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:00.087656+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.658838272s of 17.800985336s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 4628480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:01.088086+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 4628480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:02.088302+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 4612096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:03.088537+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 4612096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:04.088806+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 4612096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:05.089010+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 4603904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:06.089353+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 4603904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:07.089474+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4595712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:08.089602+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4595712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:09.089884+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4595712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:10.090101+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4587520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:11.090440+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4587520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:12.090560+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4579328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:13.090693+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4579328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:14.090846+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4571136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:15.091010+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4571136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:16.091261+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 4562944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:17.091384+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 4562944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:18.091505+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4546560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:19.091629+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4546560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:20.091804+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4538368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:21.091914+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 4530176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:22.093145+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 4530176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:23.093272+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 4521984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:24.093436+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 4513792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:25.093554+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 4513792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:26.093701+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4505600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:27.093867+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4505600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:28.094019+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4497408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:29.094249+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4497408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:30.094463+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4497408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:31.094596+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4489216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:32.094712+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4489216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:33.094879+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4481024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:34.095134+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4481024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:35.095311+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4472832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:36.095474+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4472832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:37.095596+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4472832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:38.095721+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 4456448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:39.095907+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 4456448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:40.096323+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 4448256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:41.096467+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 4448256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:42.096653+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 4448256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:43.096803+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 4440064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:44.096906+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 4440064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:45.097136+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 4431872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:46.097297+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 4431872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:47.097426+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 4407296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:48.097543+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 4399104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:49.097685+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 4390912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:50.097851+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 4390912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:51.098004+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 4390912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:52.098157+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 4390912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:53.098321+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 4382720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:54.098451+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 4382720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:55.098611+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 4374528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:56.098897+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 4374528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:57.099144+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 4366336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:58.099269+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 4366336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:51:59.099424+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 4366336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:00.099575+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 4358144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:01.099719+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 4358144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:02.099884+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 4349952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:03.100114+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 4341760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:04.100258+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 4333568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:05.100373+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 4333568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:06.100574+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 4333568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:07.100740+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 4325376 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:08.100895+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 4317184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:09.101175+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4308992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:10.101339+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4308992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:11.101479+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4308992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:12.101641+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 4300800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:13.101837+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 4292608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:14.102008+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 4284416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:15.102170+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 4284416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:16.102463+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 4276224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:17.102637+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 4276224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:18.102791+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 4276224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:19.102931+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 4268032 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:20.103088+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 4268032 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:21.103253+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 4259840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:22.103388+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 4259840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:23.103524+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 4251648 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:24.103661+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 4243456 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:25.103780+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 4243456 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:26.103995+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 4235264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:27.104127+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 4235264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:28.104273+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 4227072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:29.104616+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 4227072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:30.104757+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 4227072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:31.106478+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 4218880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:32.106615+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 4210688 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:33.106792+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 4202496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:34.106974+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 4202496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:35.108269+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 4202496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:36.109594+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 4194304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:37.109780+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 4194304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:38.110206+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 4186112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:39.110323+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 4186112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:40.110508+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 4186112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:41.110936+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 4177920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:42.111094+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 4169728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:43.111396+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 4161536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:44.111531+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 4161536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:45.111681+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 4153344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:46.111849+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 4153344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:47.112067+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 4153344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:48.112543+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 4136960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:49.112899+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 4128768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:50.113204+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 4128768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:51.113609+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 4128768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:52.113794+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 4120576 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:53.114105+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 4112384 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:54.114239+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 4112384 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:55.114377+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 4104192 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:56.114554+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 4104192 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:57.114720+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 4096000 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:58.114926+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 4096000 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:52:59.115107+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 4087808 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:00.115292+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 4087808 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:01.115454+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 4087808 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:02.115585+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 4071424 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:03.115740+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 4071424 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:04.115868+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 4063232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:05.116003+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 4063232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:06.116245+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 4063232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:07.116403+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 4055040 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:08.116592+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 4038656 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:09.116781+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 4030464 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:10.116980+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 4030464 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:11.117164+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 4022272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:12.117368+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 4022272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:13.117512+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 4022272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:14.117669+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 4014080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:15.117844+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 4014080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:16.118008+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 4005888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:17.118101+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 3997696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:18.118242+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 3997696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:19.118401+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 3989504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:20.118678+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 3989504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:21.118898+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 3981312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:22.119134+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 3981312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:23.119341+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 3973120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:24.119478+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 3973120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:25.119572+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 3973120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:26.119765+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:27.119999+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 3964928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:28.120118+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 3964928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:29.120351+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 3956736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:30.120480+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 3956736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:31.120597+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 3948544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:32.120715+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 3940352 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:33.120827+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 3940352 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:34.120943+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3932160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:35.121066+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 3923968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:36.121211+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 3923968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:37.121362+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3915776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:38.121456+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 3907584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:39.121572+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 3891200 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:40.121707+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 3883008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:41.121854+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 3883008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d03d400 session 0x559f2c8b10e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:42.122058+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 3874816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:43.122213+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 3874816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:44.122352+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 3874816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:45.122486+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 3866624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:46.122665+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 3866624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:47.122837+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 3858432 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:48.123101+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 3858432 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:49.123229+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 3850240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:50.123353+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 3850240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:51.123520+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 3850240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:52.123677+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 3842048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 172.042541504s of 172.046844482s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:53.123837+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 3833856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991354 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:54.123964+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 3817472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:55.124094+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 3817472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:56.124289+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 3817472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:57.124416+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 3809280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:58.124535+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 3809280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992866 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:53:59.124664+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3801088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:00.124785+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3801088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:01.124918+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 3792896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8245 writes, 33K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8245 writes, 1525 syncs, 5.41 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8245 writes, 33K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 21.32 MB, 0.04 MB/s
                                           Interval WAL: 8245 writes, 1525 syncs, 5.41 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:02.125083+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 3710976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:03.125253+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 3694592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:04.125484+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992275 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 3686400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:05.125605+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 3686400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:06.125795+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 3678208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:07.125929+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 3678208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:08.126100+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 3670016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.031002045s of 16.042385101s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:09.126254+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 3661824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:10.126372+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:11.126510+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:12.126633+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:13.126764+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:14.126888+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:15.127014+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:16.127207+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:17.127392+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:18.127510+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:19.127727+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:20.127866+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:21.128058+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:22.128271+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:23.128488+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:24.128708+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:25.128881+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:26.129102+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:27.129395+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 3588096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:28.129568+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 3579904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:29.129828+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2dbe8d20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2c8afa40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:30.130027+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:31.130264+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:32.130459+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:33.130643+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:34.130844+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:35.131083+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2cc78d20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d9601e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:36.131296+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:37.131466+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:38.131609+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:39.131726+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:40.131919+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59ac00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.213760376s of 32.217681885s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:41.132128+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:42.132355+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:43.132519+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:44.132693+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993787 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:45.132835+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:46.132988+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:47.133106+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:48.133237+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:49.133383+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995431 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:50.133517+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:51.133675+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:52.133797+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.187956810s of 12.277172089s, submitted: 4
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:53.133932+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:54.134096+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996943 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:55.134209+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:56.134375+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:57.134521+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:58.134667+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3448832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:54:59.134821+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995629 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:00.134987+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:01.135092+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:02.135240+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:03.135388+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:04.135516+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:05.135664+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:06.135829+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:07.136080+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:08.136280+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:09.136424+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:10.136628+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:11.136758+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:12.136870+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:13.137011+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9400 session 0x559f2d9534a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2a974f00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:14.137671+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:15.137843+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:16.138116+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:17.138298+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:18.138487+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:19.138664+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:20.138836+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:21.139002+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3358720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:22.139102+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3358720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.902038574s of 29.934965134s, submitted: 5
Oct 08 10:22:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26164 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:23.139262+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:24.139404+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995569 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 3301376 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:25.139514+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:26.139647+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:27.139750+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,1])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 3104768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:28.139911+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:29.140269+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:30.140421+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:31.140575+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:32.140736+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:33.140889+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:34.141101+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997141 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:35.141260+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.192111969s of 12.802393913s, submitted: 341
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:36.141469+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:37.141634+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:38.141786+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:39.141936+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:40.142130+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:41.142314+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:42.142478+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:43.142897+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:44.143077+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:45.143276+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:46.143438+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:47.143563+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:48.143722+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:49.144429+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:50.144570+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:51.144692+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:52.144810+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:53.144920+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:54.145073+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:55.145209+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:56.145371+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:57.145578+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:58.145710+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:55:59.145904+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 2949120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:00.146075+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 2940928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:01.146230+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 2940928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:02.146396+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2c648960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d9612c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 2940928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:03.146557+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:04.146693+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:05.146870+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:06.147047+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:07.147166+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:08.147285+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:09.147395+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:10.147590+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:11.147726+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:12.148181+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.170238495s of 37.409439087s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:13.148498+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:14.148957+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998062 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:15.149283+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:16.149557+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:17.149730+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:18.149870+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:19.150123+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999574 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:20.150347+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:21.150573+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:22.150730+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:23.150886+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:24.151071+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998983 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:25.151277+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.123427391s of 12.157036781s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:26.151564+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:27.151720+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:28.151984+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:29.152165+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:30.152420+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:31.152625+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:32.152865+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:33.153080+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:34.153262+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:35.153480+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:36.153734+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:37.153941+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:38.154154+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:39.154365+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:40.154582+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:41.154764+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:42.154954+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:43.155135+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:44.155352+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:45.155554+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:46.155705+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:47.155857+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8c00 session 0x559f2d960d20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2c59ac00 session 0x559f2da1d2c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:48.156011+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:49.156215+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:50.156352+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:51.156513+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:52.156626+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:53.158159+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:54.158289+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:55.158419+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:56.158613+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:57.158752+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:58.158867+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.183380127s of 33.190643311s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:56:59.159200+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998392 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:00.159320+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:01.159465+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:02.159608+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:03.159763+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:04.159892+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998392 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:05.160026+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:06.160176+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:07.171591+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:08.171720+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 2859008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:09.171845+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998392 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 2859008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:10.172020+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.049246788s of 12.226043701s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:11.172198+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:12.172312+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:13.172490+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:14.172623+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:15.172752+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:16.172882+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:17.172994+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:18.173265+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:19.173386+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2cbf63c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d82eb40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:20.175709+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:21.175880+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:22.176055+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:23.176182+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:24.176288+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:25.176458+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:26.176622+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:27.176730+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:28.176854+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:29.176999+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:30.177163+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.388336182s of 20.395538330s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:31.177313+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 2842624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:32.177447+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 2842624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:33.177626+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 2842624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:34.177831+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997801 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 2842624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:35.178075+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:36.178349+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:37.178493+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:38.178672+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:39.178847+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997801 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:40.179006+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:41.179170+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:42.179303+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:43.179495+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.363765717s of 12.369489670s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:44.179634+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:45.180360+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:46.180540+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:47.180695+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:48.180840+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:49.181129+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:50.181252+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:51.181546+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:52.181759+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:53.181919+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:54.182074+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:55.182186+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:56.182319+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:57.182440+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:58.183142+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 2818048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:57:59.183269+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 2818048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:00.183385+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 2818048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:01.183513+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 2809856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:02.183642+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 2809856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:03.184166+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:04.184497+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:05.184797+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:06.185156+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:07.185323+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:08.185458+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:09.185703+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:10.185847+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d953680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2dbe94a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:11.186056+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:12.186241+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:13.186445+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:14.186608+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:15.186787+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:16.186976+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:17.187125+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:18.187344+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:19.187594+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:20.187729+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:21.187852+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.191692352s of 38.196037292s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:22.188007+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:23.188160+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:24.189487+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:25.189634+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999313 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:26.189795+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:27.190084+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:28.190230+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:29.190414+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:30.190578+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:31.190947+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:32.191127+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:33.191473+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:34.191636+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:35.191820+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:36.192116+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.237012863s of 15.247964859s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:37.192246+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:38.192379+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:39.192557+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:40.192724+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:41.192877+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:42.193012+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:43.193397+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:44.193698+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:45.193845+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:46.194070+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:47.194202+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:48.194353+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:49.194525+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:50.194783+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:51.194932+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:52.195106+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:53.195276+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:54.195912+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:55.196089+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:56.196239+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:57.196635+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:58.196787+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:59.197009+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:00.197121+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:01.197261+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:02.197405+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:03.197535+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:04.200094+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8c00 session 0x559f2d82f2c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:05.200199+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:06.200480+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:07.200631+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:08.200770+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:09.200882+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:10.201065+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:11.201206+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:12.201362+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:13.201520+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:14.201757+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:15.202014+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.051769257s of 39.055622101s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:16.202235+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:17.202385+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d961680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2a95b680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:18.202554+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:19.202720+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:20.202855+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:21.202993+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:22.203121+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:23.203279+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:24.203382+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:25.203524+0000)
Oct 08 10:22:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:26.203682+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:27.203852+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:28.204120+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.737722397s of 12.740792274s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:29.204250+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:30.204632+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000957 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:31.204958+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 1703936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:32.205260+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:33.205661+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:34.205864+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:35.206058+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:36.206329+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:37.206484+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:38.206625+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:39.207122+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:40.207291+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.218849182s of 12.235140800s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:41.207407+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:42.207580+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:43.207767+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:44.207892+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:45.208299+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:46.208472+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:47.208635+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:48.208772+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:49.208961+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:50.209160+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:51.209291+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:52.209427+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:53.209565+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:54.209742+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:55.209880+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:56.210077+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:57.210272+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:58.210401+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:59.210577+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:00.210748+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:01.210905+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:02.211074+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:03.211207+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:04.211329+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c4243c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d82e3c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:05.211524+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:06.211686+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:07.211857+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:08.212002+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:09.212143+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:10.212325+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:11.212451+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000080s
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:12.212573+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:13.212706+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:14.212832+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:15.212960+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.856376648s of 34.864582062s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:16.213106+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e000 session 0x559f2a9a3a40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:17.213268+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:18.213416+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:19.213542+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:20.213698+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:21.213832+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:22.213959+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:23.214120+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:24.214239+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:25.214350+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:26.214491+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d9534a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d9612c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:27.214618+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.321186066s of 12.326921463s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:28.214760+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:29.214875+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:30.215008+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001614 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:31.215111+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:32.215332+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:33.215412+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:34.215538+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:35.215682+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001614 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:36.215874+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:37.216003+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:38.216141+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:39.216332+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:40.216473+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:41.216683+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:42.216825+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:43.216934+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.701647758s of 15.710140228s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:44.217092+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:45.217200+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003258 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:46.217366+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:47.217528+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:48.217679+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:49.217853+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:50.217991+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002667 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:51.218182+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:52.218387+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:53.218546+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:54.218747+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2a9703c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:55.218942+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002535 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:56.219195+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:57.219336+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:58.219450+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:59.219661+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:00.219794+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002535 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:01.219922+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:02.220055+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:03.220319+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:04.220451+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 2670592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.336950302s of 21.398941040s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:05.220585+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002667 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:06.220810+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:07.220990+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:08.221131+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:09.221307+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:10.221509+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004179 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:11.221954+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:12.222151+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:13.222323+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:14.222466+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:15.222618+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:16.222788+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:17.222945+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:18.223130+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:19.223263+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:20.223386+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.527006149s of 15.556138039s, submitted: 4
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:21.223545+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:22.223723+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:23.223857+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:24.224009+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:25.224182+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:26.224443+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:27.224578+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:28.224801+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:29.224976+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:30.225124+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:31.225294+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2cadd2c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d961c20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:32.225506+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:33.225747+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:34.225884+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:35.227111+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:36.227917+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:37.229726+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:38.230792+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:39.231280+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:40.231468+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:41.231635+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:42.233028+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.097640991s of 22.100765228s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:43.233513+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:44.233664+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:45.234224+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:46.235243+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:47.235389+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:48.235522+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:49.235656+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:50.235796+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:51.236377+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:52.236520+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:53.236655+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:54.236832+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.092028618s of 12.143527031s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:55.237089+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003918 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:56.237256+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:57.237374+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:58.237510+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:59.237680+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:00.237939+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:01.238139+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:02.238296+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:03.238544+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:04.238695+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:05.238841+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:06.239104+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:07.239284+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:08.239456+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:09.239618+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:10.239762+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:11.239913+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:12.240071+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:13.240222+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2a9550e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d82fe00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:14.240362+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:15.240534+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:16.240723+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:17.240865+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:18.240997+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: mgrc ms_handle_reset ms_handle_reset con 0x559f2abaa000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3802415056
Oct 08 10:22:35 compute-0 ceph-osd[81751]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3802415056,v1:192.168.122.100:6801/3802415056]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: get_auth_request con 0x559f2d0e8c00 auth_method 0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: mgrc handle_mgr_configure stats_period=5
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:19.241152+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:20.241286+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:21.241447+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:22.241636+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:23.241789+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:24.241937+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.943304062s of 30.003890991s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:25.242086+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003918 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:26.242237+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:27.242399+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:28.242541+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:29.242662+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:30.242780+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:31.242897+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:32.243067+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:33.243203+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:34.243333+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:35.243472+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:36.243651+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:37.243785+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:38.243978+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:39.244119+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:40.244247+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:41.244387+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:42.244521+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.837564468s of 17.844263077s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82e1e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82ef00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:43.245193+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:44.245580+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:45.245771+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005298 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:46.245937+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:47.246600+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:48.247140+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:49.247393+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:50.247526+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:51.247714+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005298 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:52.247914+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:53.248051+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.888109207s of 10.891509056s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:54.248185+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:55.248356+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:56.248708+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:57.248979+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 1540096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:58.249105+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:59.249410+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:00.249624+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:01.249793+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006942 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:02.249930+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:03.250084+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:04.250234+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:05.250370+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.085538864s of 12.127921104s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:06.250619+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006351 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:07.250824+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:08.250991+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:09.251183+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:10.251358+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:11.251481+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:12.251683+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:13.252306+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:14.252473+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:15.252641+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ef2c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2c5c8b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:16.252897+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:17.253236+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:18.253496+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:19.253625+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:20.253780+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:21.254091+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:22.254264+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:23.254406+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:24.254554+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:25.254667+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:26.254802+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.218805313s of 21.227340698s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:27.255014+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:28.255286+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:29.255519+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:30.255660+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:31.255806+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006351 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:32.255901+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:33.256043+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:34.256215+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:35.256366+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:36.256544+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007863 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:37.256712+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:38.256896+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.160308838s of 12.177426338s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:39.257007+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:40.257073+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:41.257183+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007272 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:42.257442+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:43.257567+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:44.257707+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:45.257870+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:46.258103+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:47.258220+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:48.258376+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2400 session 0x559f2da1f0e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2a8670e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:49.258509+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:50.258628+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:51.258785+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:52.258930+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:53.259098+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:54.259236+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:55.259574+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:56.259793+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:57.259963+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:58.260112+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:59.260248+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.320930481s of 20.461774826s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:00.260536+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:01.260743+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007272 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9009 writes, 35K keys, 9009 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9009 writes, 1887 syncs, 4.77 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 764 writes, 1222 keys, 764 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 764 writes, 362 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:02.260872+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:03.261109+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:04.261305+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:05.261586+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:06.261795+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008784 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:07.262000+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:08.262176+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:09.262365+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.007425308s of 10.107902527s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:10.262591+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:11.262791+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009705 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:12.263006+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:13.263191+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:14.263344+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:15.263508+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:16.263728+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:17.263939+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:18.264128+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:19.264310+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:20.264530+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:21.264755+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:22.264938+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:23.265151+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:24.265361+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:25.265488+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:26.265681+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:27.265859+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:28.266079+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:29.266221+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:30.266422+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:31.266575+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:32.266701+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:33.266864+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:34.267116+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:35.267284+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:36.267482+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:37.267790+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:38.268000+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:39.268224+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:40.268433+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:41.268702+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:42.268903+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:43.269195+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d70cb40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5ee1e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:44.269384+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:45.269630+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:46.269839+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:47.270160+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:48.270382+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:49.270561+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:50.270722+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:51.270840+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:52.270954+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:53.271096+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:54.271243+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.391696930s of 45.435684204s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:55.271377+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:56.271529+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009705 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:57.271707+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:58.271840+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:59.272120+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:00.272254+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:01.272431+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011217 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 1449984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:02.272570+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:03.272729+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:04.272866+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:05.272995+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:06.273172+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:07.273334+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:08.273534+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.340482712s of 13.399305344s, submitted: 4
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:09.273715+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:10.273889+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:11.274081+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:12.274219+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:13.274322+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:14.274445+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:15.274614+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:16.274793+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:17.274971+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:18.275216+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:19.275426+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:20.275903+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:21.276059+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:22.276215+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.399309158s of 14.402190208s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:23.276339+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86638592 unmapped: 1400832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:24.276494+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,4])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:25.276658+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,1,2])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86614016 unmapped: 1425408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:26.276838+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009975 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 2293760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:27.276895+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:28.277019+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:29.277171+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:30.277353+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:31.277503+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:32.277621+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:33.277786+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:34.277948+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:35.278149+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:36.278282+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:37.278435+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:38.278584+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:39.278726+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:40.278893+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:41.279074+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:42.279309+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:43.279420+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:44.279550+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:45.279688+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:46.279844+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:47.280026+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:48.280280+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:49.280470+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:50.280627+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:51.280862+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:52.281007+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:53.281133+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:54.281347+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:55.281508+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:56.281680+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:57.281808+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:58.281928+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:59.282091+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:00.282240+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:01.282453+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:02.282567+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:03.282730+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:04.282878+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:05.283151+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:06.283402+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:07.283579+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:08.283736+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:09.283897+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2dc09680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d9612c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:10.284156+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:11.284348+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:12.284532+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:13.284739+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:14.284989+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:15.285141+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:16.285319+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:17.285516+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16503 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2c8afe00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d953a40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:18.285698+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:19.285899+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.254104614s of 57.032154083s, submitted: 332
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:20.286138+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:21.286306+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:22.286458+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:23.286603+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:24.286715+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:25.286839+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:26.287100+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:27.287239+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:28.287448+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:29.287662+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:30.287842+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:31.288098+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010167 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:32.288221+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:33.288408+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:34.288547+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:35.288770+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:36.288964+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.976808548s of 16.986804962s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:37.289144+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:38.289345+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d70de00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2d554960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:39.289514+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:40.289698+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:41.290126+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:42.290298+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:43.290519+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:44.290774+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:45.290888+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:46.291087+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:47.291316+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:48.291505+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:49.291617+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.073468208s of 12.254982948s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:50.291764+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:51.291945+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:52.292105+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:53.292288+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:54.292471+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:55.292646+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:56.292825+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:57.293004+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:58.293169+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:59.293317+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:00.293483+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:01.293669+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:02.294069+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:03.294219+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:04.294351+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.275589943s of 15.385351181s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:05.294483+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:06.294636+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:07.295115+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:08.295399+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:09.296338+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:10.296585+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:11.296755+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:12.296917+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:13.297075+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:14.297216+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:15.297372+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:16.297677+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:17.297864+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:18.298022+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:19.298271+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:20.298445+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:21.298582+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:22.298823+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:23.299067+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:24.299241+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:25.299491+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:26.299709+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:27.299901+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:28.300097+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:29.300237+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:30.300370+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:31.300844+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:32.301336+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:33.301790+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:34.302082+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d82e960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d82ef00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:35.302454+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:36.302849+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:37.303216+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:38.303509+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:39.303836+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:40.304132+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:41.304429+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:42.304617+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:43.304787+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:44.305003+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.478878021s of 40.551963806s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:45.305308+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:46.305519+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:47.305749+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:48.305998+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:49.306229+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:50.306390+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 2195456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:51.306667+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:52.306796+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013059 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:53.306960+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:54.307111+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:55.307280+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:56.307517+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:57.307721+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.107625008s of 12.130958557s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012468 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:58.307935+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:59.308139+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:00.308322+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:01.308518+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:02.308669+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:03.309160+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:04.309381+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:05.309817+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:06.310589+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:07.311773+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:08.311941+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:09.312083+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:10.314258+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:11.314556+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:12.314868+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:13.315140+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:14.315308+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:15.315449+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:16.315691+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:17.315824+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:18.316071+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:19.316262+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:20.316462+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.331020355s of 23.338811874s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 2179072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:21.316598+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 2154496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:22.316819+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021781 data_alloc: 218103808 data_used: 167936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 151 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x107e4e/0x1c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 2146304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d952960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:23.317019+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 2146304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:24.317142+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d5ee1e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5ef2c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:25.317350+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 152 ms_handle_reset con 0x559f2d680c00 session 0x559f2d555680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:26.317512+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fbe3e000/0x0/0x4ffc00000, data 0x90c0a4/0x9ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:27.317706+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083662 data_alloc: 218103808 data_used: 176128
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:28.317837+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:29.318081+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:30.318238+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3a000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:31.318429+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:32.318605+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087260 data_alloc: 218103808 data_used: 176128
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:33.318764+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:34.318927+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3a000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.067012787s of 14.482573509s, submitted: 64
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:35.319225+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:36.319453+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:37.319597+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087392 data_alloc: 218103808 data_used: 176128
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:38.319667+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:39.319790+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:40.319892+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:41.320014+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:42.320102+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089576 data_alloc: 218103808 data_used: 176128
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:43.320277+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:44.320426+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:45.320556+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:46.320709+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.071710587s of 12.114167213s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:47.320837+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088985 data_alloc: 218103808 data_used: 176128
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:48.321028+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:49.321213+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:50.321345+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:51.321492+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:52.321643+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:53.321760+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:54.321861+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:55.322014+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:56.322181+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:57.322342+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:58.322459+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:59.322611+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:00.322774+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:01.322895+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:02.323071+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:03.323218+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d6370e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2400 session 0x559f2dbe8b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c5dfc20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:04.323361+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:05.323503+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2800 session 0x559f2a866000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a975680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:06.323642+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2000 session 0x559f2a95bc20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:07.323787+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.177728653s of 20.183889389s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092771 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:08.323939+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2400 session 0x559f2b6512c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9a3e00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2d960780
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d554780
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:09.324119+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:10.324251+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb528000/0x0/0x4ffc00000, data 0x121c314/0x12e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:11.324405+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:12.324521+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2400 session 0x559f2dbe81e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165866 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 18571264 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:13.324675+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 18554880 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:14.324811+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb528000/0x0/0x4ffc00000, data 0x121c337/0x12e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 10215424 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:15.325006+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 8683520 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:16.325214+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 8683520 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:17.325349+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233556 data_alloc: 234881024 data_used: 9666560
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 8667136 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:18.325469+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb524000/0x0/0x4ffc00000, data 0x121e309/0x12e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:19.325583+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:20.325703+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:21.325825+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:22.325971+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233556 data_alloc: 234881024 data_used: 9666560
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb524000/0x0/0x4ffc00000, data 0x121e309/0x12e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:23.326138+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:24.326244+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.378948212s of 17.598480225s, submitted: 58
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:25.326361+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103514112 unmapped: 8765440 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:26.326517+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 9936896 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:27.326626+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346036 data_alloc: 234881024 data_used: 10461184
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 9936896 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:28.326783+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 9781248 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:29.326922+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:30.327094+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e8400 session 0x559f2da1f0e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d555e00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:31.327259+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:32.327381+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346036 data_alloc: 234881024 data_used: 10461184
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:33.327503+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:34.327722+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:35.327897+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:36.328091+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:37.328227+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346948 data_alloc: 234881024 data_used: 10530816
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:38.328414+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:39.328554+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:40.328735+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.637916565s of 16.192432404s, submitted: 74
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:41.328919+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:42.329110+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347080 data_alloc: 234881024 data_used: 10530816
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:43.329256+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2a9543c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d953a40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d952960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d82e960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d554b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a954b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:44.329381+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d82fe00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2d82ef00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 9674752 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:45.329532+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d554960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2dbe90e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d6370e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c5fc960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a9a3e00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:46.329706+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9409000/0x0/0x4ffc00000, data 0x2199319/0x2263000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:47.329853+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82e000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367280 data_alloc: 234881024 data_used: 10534912
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:48.329999+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:49.330117+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c424000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:50.330263+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2cbf7680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2d70d2c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102727680 unmapped: 9551872 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:51.330386+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102727680 unmapped: 9551872 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:52.330529+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378901 data_alloc: 234881024 data_used: 11943936
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:53.330701+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:54.330828+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.715806007s of 13.765681267s, submitted: 16
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:55.330977+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:56.331211+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103841792 unmapped: 8437760 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:57.331355+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378853 data_alloc: 234881024 data_used: 11948032
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 8404992 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:58.331502+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 8404992 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:59.331630+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:00.331774+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:01.331898+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:02.332133+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379774 data_alloc: 234881024 data_used: 11948032
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:03.332286+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:04.332447+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.078499794s of 10.066446304s, submitted: 47
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 3858432 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:05.332594+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 3768320 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:06.332751+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:07.332941+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423758 data_alloc: 234881024 data_used: 13017088
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:08.333078+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:09.333234+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e52000/0x0/0x4ffc00000, data 0x274f33c/0x281a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e52000/0x0/0x4ffc00000, data 0x274f33c/0x281a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:10.333404+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:11.333583+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:12.333756+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422330 data_alloc: 234881024 data_used: 13017088
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 3956736 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:13.334147+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 3956736 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:14.334437+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e31000/0x0/0x4ffc00000, data 0x277033c/0x283b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a974000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.945456505s of 10.029915810s, submitted: 20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 5545984 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:15.334737+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d8d0960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:16.335011+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:17.335272+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352840 data_alloc: 234881024 data_used: 10534912
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:18.335496+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:19.335707+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:20.335956+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:21.336145+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:22.336311+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352840 data_alloc: 234881024 data_used: 10534912
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:23.336522+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:24.336725+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2d555c20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c36be00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ef4a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:25.336851+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 11517952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:26.337055+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:27.337180+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:28.337341+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:29.337545+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:30.337752+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:31.337995+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:32.338160+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:33.338299+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:34.338459+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:35.338608+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:36.338848+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:37.339076+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:38.339300+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:39.339504+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:40.339735+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:41.339922+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:42.340161+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:43.340298+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:44.340416+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:45.340550+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:46.340775+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:47.340898+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:48.341089+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:49.341233+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:50.341373+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2c8b0b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c8b03c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2c8b0d20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:51.341502+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2b2d8b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2b2d8000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:52.341634+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a999e00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.161369324s of 37.341365814s, submitted: 63
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2a9983c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a996b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3400 session 0x559f2a9974a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3400 session 0x559f2a958960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a9583c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198144 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:53.341767+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:54.341919+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa225000/0x0/0x4ffc00000, data 0x1380284/0x1447000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:55.342115+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a9703c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:56.342271+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:57.342408+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d5ef4a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198144 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:58.342565+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ee5a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:59.342706+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ee1e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 26755072 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:00.342888+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 26755072 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2b2d92c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:01.343016+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 26722304 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:02.343178+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272627 data_alloc: 234881024 data_used: 10821632
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:03.343314+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:04.343460+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:05.343610+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:06.343784+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:07.343939+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272627 data_alloc: 234881024 data_used: 10821632
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:08.344073+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:09.344227+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:10.344365+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:11.344550+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.155124664s of 19.309776306s, submitted: 20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:12.344711+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 19611648 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299469 data_alloc: 234881024 data_used: 11239424
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:13.344840+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 17702912 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:14.344977+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 17702912 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x16ed294/0x17b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:15.345132+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:16.345442+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:17.345666+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:18.345896+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311047 data_alloc: 234881024 data_used: 11096064
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:19.346087+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 18505728 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:20.346288+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 18505728 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:21.346445+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:22.346628+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:23.346814+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303199 data_alloc: 234881024 data_used: 11096064
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:24.347273+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.776124001s of 13.241639137s, submitted: 70
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:25.347413+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:26.347616+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18448384 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:27.347883+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18448384 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:28.348234+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303147 data_alloc: 234881024 data_used: 11096064
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:29.348626+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:30.348816+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:31.348957+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 18432000 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:32.349122+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:33.349240+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303235 data_alloc: 234881024 data_used: 11096064
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:34.349486+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:35.349608+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:36.349761+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:37.349889+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.900504112s of 12.918242455s, submitted: 5
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:38.350023+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304083 data_alloc: 234881024 data_used: 11104256
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:39.350515+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:40.350702+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e82000/0x0/0x4ffc00000, data 0x1722294/0x17ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c5c8f00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2cc785a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:41.350862+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a996000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:42.351320+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:43.351598+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:44.351761+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:45.351883+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:46.352052+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:47.352197+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:48.352354+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:49.352532+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:50.352697+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:51.352797+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:52.352923+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:53.353085+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:54.353210+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:55.353369+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:56.353656+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:57.353827+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:58.353990+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:59.354151+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:00.354268+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:01.354469+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:02.354610+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:03.354779+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:04.354921+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:05.355104+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:06.355244+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:07.355386+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:08.355666+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:09.355849+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:10.355980+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:11.356136+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2da1f860
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d636f00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d5ee3c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2cc5ed20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.925148010s of 34.002922058s, submitted: 29
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:12.356266+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9925a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82e3c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fe00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2c5c9860
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2b2d8000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:13.356453+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193305 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa497000/0x0/0x4ffc00000, data 0x110d2e6/0x11d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:14.356637+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:15.356832+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:16.357076+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:17.357453+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:18.357604+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195599 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2a958960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101908480 unmapped: 25067520 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa497000/0x0/0x4ffc00000, data 0x110d2e6/0x11d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:19.357752+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101916672 unmapped: 25059328 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:20.357955+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103686144 unmapped: 23289856 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:21.359648+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:22.360427+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:23.361727+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244379 data_alloc: 218103808 data_used: 7331840
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:24.363002+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa496000/0x0/0x4ffc00000, data 0x110d309/0x11d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:25.364001+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:26.364564+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:27.365347+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa496000/0x0/0x4ffc00000, data 0x110d309/0x11d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:28.366159+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244379 data_alloc: 218103808 data_used: 7331840
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:29.366707+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:30.367349+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.476533890s of 18.995376587s, submitted: 43
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 20316160 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:31.367735+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa178000/0x0/0x4ffc00000, data 0x142b309/0x14f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 18898944 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:32.368258+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa100000/0x0/0x4ffc00000, data 0x14a3309/0x156c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 17604608 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa100000/0x0/0x4ffc00000, data 0x14a3309/0x156c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:33.368615+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:34.368888+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:35.369184+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:36.369559+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:37.369731+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:38.370000+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:39.370141+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:40.370323+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:41.370468+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:42.370616+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:43.370764+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:44.370934+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:45.371223+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:46.371551+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:47.371708+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:48.371977+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:49.372269+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dbe81e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2c424b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c800 session 0x559f2c5df860
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76cc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76cc00 session 0x559f2c8b1e00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76cc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.359991074s of 18.809175491s, submitted: 62
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76cc00 session 0x559f2c8b1c20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9a2960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2d5ee000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2d5ee780
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c800 session 0x559f2d5eeb40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:50.372480+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:51.372637+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:52.372848+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:53.373012+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344055 data_alloc: 218103808 data_used: 8523776
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:54.373273+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:55.373414+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c5da1e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:56.373616+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 22773760 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:57.373766+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16949248 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:58.373950+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403516 data_alloc: 234881024 data_used: 15618048
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16949248 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:59.374143+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:00.374359+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:01.374504+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:02.374722+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:03.374891+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.809376717s of 14.030103683s, submitted: 19
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403852 data_alloc: 234881024 data_used: 15618048
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:04.375067+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:05.375239+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:06.375449+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:07.375621+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 115367936 unmapped: 15810560 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:08.375810+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431280 data_alloc: 234881024 data_used: 16175104
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13950976 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:09.375992+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 13107200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:10.376144+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:11.376303+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:12.376637+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:13.376847+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439394 data_alloc: 234881024 data_used: 16089088
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:14.377126+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:15.377279+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:16.377478+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:17.377647+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2a866b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.375069618s of 14.576653481s, submitted: 66
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2d8d0000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:18.377863+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286810 data_alloc: 218103808 data_used: 6938624
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2dbe94a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:19.378072+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:20.378321+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e8400 session 0x559f2c8ae780
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2a997c20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:21.378600+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f2000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:22.378856+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d9605a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c8b1a40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:23.379152+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150044 data_alloc: 218103808 data_used: 184320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2b6512c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:24.379361+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:25.379582+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:26.380238+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:27.380736+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:28.381546+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148764 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:29.381804+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:30.382107+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:31.382438+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.605167389s of 13.440299034s, submitted: 69
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:32.383793+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:33.384479+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148896 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:34.384827+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:35.385186+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:36.385561+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:37.385730+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:38.386265+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151336 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:39.386559+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:40.386966+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:41.387363+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:42.387627+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:43.388139+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151336 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:44.388460+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:45.388612+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.465369225s of 14.476176262s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:46.388853+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:47.388994+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:48.389188+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151204 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:49.389444+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:50.389673+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:51.389827+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:52.390122+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:53.390424+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151204 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:54.390616+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2cc5e000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76dc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2d82e780
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76dc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2d0534a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c8afc20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d8d0d20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:55.390778+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2cbf7680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:56.391406+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:57.391564+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0xe2a2d6/0xef1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:58.391719+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190883 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:59.391925+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:00.392143+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:01.392341+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2d8d05a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2666 syncs, 4.09 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1892 writes, 5856 keys, 1892 commit groups, 1.0 writes per commit group, ingest: 6.53 MB, 0.01 MB/s
                                           Interval WAL: 1892 writes, 779 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0xe2a2d6/0xef1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:02.392516+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2cadd2c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:03.392741+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59a000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59a000 session 0x559f2cc5fc20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59bc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.399578094s of 17.499835968s, submitted: 27
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2d637680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192697 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:04.392928+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24371200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:05.393112+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24371200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:06.393354+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 23044096 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:07.393519+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:08.393721+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228417 data_alloc: 218103808 data_used: 5488640
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:09.393904+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:10.394119+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:11.394270+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 23003136 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:12.394440+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:13.394670+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228417 data_alloc: 218103808 data_used: 5488640
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:14.421431+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:15.421742+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.437581062s of 12.444223404s, submitted: 1
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:16.421907+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21872640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:17.422052+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 21807104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4e0000/0x0/0x4ffc00000, data 0x10be2e6/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:18.422245+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:19.422488+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:20.422708+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:21.422944+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:22.423134+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:23.423297+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:24.423488+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:25.423700+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:26.423984+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:27.424208+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:28.424427+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:29.424569+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:30.424702+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:31.424845+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:32.424985+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:33.425172+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:34.425274+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:35.425415+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:36.425587+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:37.425796+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:38.425943+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:39.426093+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:40.426250+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:41.426408+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:42.426591+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9703c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2c36ba40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59a000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59a000 session 0x559f2c36a1e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:43.426712+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59bc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2cc5ed20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.291732788s of 27.440547943s, submitted: 53
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109658112 unmapped: 21520384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dbe92c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283893 data_alloc: 218103808 data_used: 5914624
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:44.426836+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:45.426988+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:46.427197+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:47.427329+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:48.427706+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284061 data_alloc: 218103808 data_used: 5914624
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:49.427925+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:50.428097+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:51.428231+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:52.428370+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:53.428520+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305797 data_alloc: 218103808 data_used: 9158656
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:54.428715+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:55.428845+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:56.429059+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:57.429247+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:58.429392+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305797 data_alloc: 218103808 data_used: 9158656
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:59.429524+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:00.429653+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:01.429833+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.994756699s of 18.046251297s, submitted: 9
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 18489344 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:02.430008+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 16203776 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:03.431572+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392039 data_alloc: 234881024 data_used: 9400320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:04.432448+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:05.432676+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:06.433933+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:07.434254+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:08.435312+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:09.436203+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392055 data_alloc: 234881024 data_used: 9400320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:10.436938+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:11.437344+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:12.437547+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 16400384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:13.437887+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 16400384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:14.438145+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392055 data_alloc: 234881024 data_used: 9400320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.862829208s of 13.072974205s, submitted: 92
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 17727488 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b646c00 session 0x559f2c5fc5a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76cc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:15.438499+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b647c00 session 0x559f2b6505a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:16.438761+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e000 session 0x559f2d953860
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b647c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:17.438943+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:18.439271+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:19.439423+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:20.439609+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:21.439798+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17711104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:22.440020+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17711104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:23.440275+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113483776 unmapped: 17694720 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:24.440405+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381879 data_alloc: 234881024 data_used: 9400320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.620989799s of 10.001231194s, submitted: 134
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 17547264 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:25.440650+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:26.440845+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:27.441147+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:28.441366+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:29.441551+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:30.441732+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:31.441955+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:32.442120+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:33.442325+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:34.442488+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:35.442686+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.531607628s of 10.991118431s, submitted: 201
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:36.442904+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:37.443121+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:38.443315+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:39.443495+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:40.443641+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:41.443806+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:42.443971+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:43.444162+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:44.444342+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:45.444671+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:46.444907+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:47.445108+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 17334272 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:48.445384+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 17334272 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.583388329s of 13.592965126s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:49.445549+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384159 data_alloc: 234881024 data_used: 9388032
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:50.445717+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:51.445905+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:52.446085+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 17203200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:53.446329+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:54.446495+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384663 data_alloc: 234881024 data_used: 9388032
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:55.446638+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:56.446854+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2d637860
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 18366464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e000 session 0x559f2b2d8b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:57.447118+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:58.447299+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:59.447443+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259535 data_alloc: 218103808 data_used: 5898240
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37fc00 session 0x559f2d052b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59a000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:00.447601+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:01.447828+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.979496956s of 13.032286644s, submitted: 26
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:02.447970+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:03.448201+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:04.448337+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259703 data_alloc: 218103808 data_used: 5898240
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:05.448482+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:06.448656+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5321e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c6481e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59bc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:07.448787+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2dbe8960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa486000/0x0/0x4ffc00000, data 0x914284/0x9db000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:08.448943+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:09.449088+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:10.449231+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:11.449400+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:12.449552+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:13.449736+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:14.449888+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:15.450082+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:16.450240+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:17.450377+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2da1c3c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e800 session 0x559f2dbe9680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:18.450529+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:19.450659+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:20.450823+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:21.450977+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:22.451193+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:23.451350+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:24.451481+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:25.451681+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:26.451849+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:27.452006+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:28.452183+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.247751236s of 26.306289673s, submitted: 19
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:29.452409+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166234 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:30.452613+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:31.452764+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:32.452937+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:33.453186+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:34.453330+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166234 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59bc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:35.453547+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:36.453816+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:37.454002+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:38.454190+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:39.454691+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165942 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:40.455158+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:41.455576+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.318322182s of 13.376296997s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 20733952 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c5df4a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:42.455982+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:43.456332+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:44.456605+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:45.456743+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:46.456926+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a999e00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:47.457154+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c8b14a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:48.457383+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f29d55c20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2c5c9860
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:49.457593+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:50.457796+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:51.458008+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:52.458224+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:53.458423+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2cadc000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:54.458598+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.305717468s of 12.769754410s, submitted: 2
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:55.458752+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2cc5e000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:56.458927+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:57.459090+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:58.459255+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:59.459406+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:00.459549+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:01.459719+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:02.459918+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:03.460487+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:04.462283+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:05.462444+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:06.462634+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:07.462851+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:08.463070+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.356574059s of 13.715682030s, submitted: 3
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a971a40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dd0ad20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:09.463228+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237939 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:10.463353+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ee3000/0x0/0x4ffc00000, data 0x12b22d6/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:11.463501+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:12.463658+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:13.463806+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2cc5eb40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30334976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:14.463962+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239823 data_alloc: 218103808 data_used: 184320
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30334976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:15.464101+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ebf000/0x0/0x4ffc00000, data 0x12d62d6/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 30171136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:16.464307+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:17.464521+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:18.464653+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: mgrc ms_handle_reset ms_handle_reset con 0x559f2d0e8c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3802415056
Oct 08 10:22:35 compute-0 ceph-osd[81751]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3802415056,v1:192.168.122.100:6801/3802415056]
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: get_auth_request con 0x559f2b37e000 auth_method 0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: mgrc handle_mgr_configure stats_period=5
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:19.464845+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300015 data_alloc: 218103808 data_used: 9142272
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:20.465003+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ebf000/0x0/0x4ffc00000, data 0x12d62d6/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:21.465159+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c36a3c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.177964211s of 13.560062408s, submitted: 29
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:22.465306+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110714880 unmapped: 27820032 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:23.465465+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c5fc5a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:24.465619+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:25.465747+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:26.465965+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:27.466118+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:28.466329+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:29.466508+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:30.466653+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:31.466814+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:32.466984+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:33.467156+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:34.467330+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:35.467466+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:36.467681+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:37.467829+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:38.468003+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:39.468164+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:40.468324+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:41.468505+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:42.468672+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:43.468898+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:44.469111+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:45.469273+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:46.469454+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:47.469617+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:48.469796+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:49.469975+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:50.470138+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:51.470335+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:52.470675+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:53.470942+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:54.471101+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:55.471250+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:56.471488+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:57.471654+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:58.471926+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:59.472082+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.629310608s of 38.173881531s, submitted: 16
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c8b03c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:00.472209+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:01.472349+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:02.473149+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:03.473350+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:04.473590+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205109 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:05.473782+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:06.473982+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76dc00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2c8ae780
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d636960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:07.474187+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a866000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a955680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:08.474427+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:09.474566+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205109 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:10.474717+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107323392 unmapped: 31211520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:11.474896+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:12.475075+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:13.475234+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:14.475381+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232013 data_alloc: 218103808 data_used: 4112384
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:15.475518+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:16.475704+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:17.475851+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:18.476135+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:19.476303+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232013 data_alloc: 218103808 data_used: 4112384
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:20.476457+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.716075897s of 20.768712997s, submitted: 10
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:21.476809+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 22183936 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:22.476947+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114319360 unmapped: 24215552 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:23.477117+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bff000/0x0/0x4ffc00000, data 0x158f274/0x1655000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:24.477268+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304949 data_alloc: 218103808 data_used: 5197824
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:25.477435+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:26.477617+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:27.477741+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:28.477906+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:29.478137+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305101 data_alloc: 218103808 data_used: 5201920
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:30.478310+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:31.478489+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:32.478709+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:33.478994+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:34.479170+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9774a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305101 data_alloc: 218103808 data_used: 5201920
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.963165283s of 14.391463280s, submitted: 83
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:35.479322+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d8d10e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:36.479526+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:37.479690+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:38.479910+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:39.480104+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:40.480265+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:41.480412+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:42.480563+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:43.480817+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:44.480967+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:45.481147+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:46.481369+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:47.481564+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:48.481652+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:49.481783+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:50.481946+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:51.482110+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:52.482288+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:53.482402+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:54.482561+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:55.482696+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:56.482876+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:57.483070+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.524868011s of 22.775295258s, submitted: 9
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d053a40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:58.483236+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0xd2b274/0xdf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 27222016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0xd2b274/0xdf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:59.483385+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 27222016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251947 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:00.483582+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:01.483770+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:02.484070+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:03.484232+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:04.484505+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251947 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:05.484665+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:06.484891+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82f0e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:07.485048+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:08.485233+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:09.485359+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:10.485526+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314620 data_alloc: 218103808 data_used: 8757248
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:11.485701+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:12.485844+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:13.486092+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:14.486311+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:15.486481+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314620 data_alloc: 218103808 data_used: 8757248
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:16.486647+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:17.486837+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:18.487072+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 25600000 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.691644669s of 20.797815323s, submitted: 21
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9866000/0x0/0x4ffc00000, data 0x1927297/0x19ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2caddc20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:19.487234+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17956864 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:20.487458+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409342 data_alloc: 234881024 data_used: 10747904
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:21.488144+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:22.488313+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:23.488519+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b0c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2a954b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 19750912 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d6ca800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d6ca800 session 0x559f2da1f860
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:24.488766+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe297/0x1c85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2c8b10e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 19734528 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d5efe00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:25.489087+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410709 data_alloc: 234881024 data_used: 10760192
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b0c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118808576 unmapped: 19726336 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:26.489301+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 19537920 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:27.489431+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:28.489602+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:29.489743+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:30.489894+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426209 data_alloc: 234881024 data_used: 12935168
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:31.490092+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:32.490225+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:33.490696+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:34.491094+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:35.491242+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426209 data_alloc: 234881024 data_used: 12935168
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:36.491463+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 17989632 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:37.491719+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.309732437s of 18.578636169s, submitted: 92
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 14458880 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:38.491855+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e3f000/0x0/0x4ffc00000, data 0x234f2a7/0x2417000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 13942784 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:39.492194+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:40.492517+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1498987 data_alloc: 234881024 data_used: 13889536
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:41.492911+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:42.493221+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:43.493539+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:44.493777+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:45.493934+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1498987 data_alloc: 234881024 data_used: 13889536
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:46.494206+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:47.494409+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:48.494662+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:49.494874+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:50.495193+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1500203 data_alloc: 234881024 data_used: 13967360
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.244200706s of 13.420284271s, submitted: 77
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:51.495470+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:52.495634+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2a976b40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5321e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d608400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e2c000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2d8d0d20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:53.495851+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:54.496251+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:55.496519+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387580 data_alloc: 234881024 data_used: 10768384
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:56.496797+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2b6505a0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d8d0f00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:57.497023+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d608400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2c6481e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98fd000/0x0/0x4ffc00000, data 0x1898297/0x195f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:58.497361+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:59.497528+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:00.497655+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:01.497829+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:02.497981+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:03.498439+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:04.498648+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:05.498908+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:06.499103+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:07.499348+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:08.499731+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:09.499870+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:10.500151+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:11.500446+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:12.500630+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:13.500861+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:14.501118+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:15.501283+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:16.501472+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:17.501662+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:18.501808+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2dab5680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2b2d90e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d636780
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5eeb40
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:19.501941+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.304420471s of 28.535713196s, submitted: 47
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a958f00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d608400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2a9990e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b0c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2cbf61e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d8d1c20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a9961e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:20.502091+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:21.502222+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:22.502381+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:23.502580+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d9530e0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:24.502744+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d608400
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2d952960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:25.502932+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d953680
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d952d20
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:26.503166+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:27.503422+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:28.503587+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:29.503749+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:30.503902+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219133 data_alloc: 218103808 data_used: 704512
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:31.504088+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:32.504447+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:33.504587+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:34.504738+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:35.504899+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219133 data_alloc: 218103808 data_used: 704512
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 21397504 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:36.505086+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 21397504 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:37.505210+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.240032196s of 18.298688889s, submitted: 18
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 19447808 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:38.506160+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x1082284/0x1149000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:39.506344+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:40.506533+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266339 data_alloc: 218103808 data_used: 815104
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:41.506792+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:42.506991+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:43.507226+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:44.507421+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:45.507636+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266339 data_alloc: 218103808 data_used: 815104
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:46.507841+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:47.508121+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:48.508324+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:49.508520+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.515064240s of 12.640249252s, submitted: 32
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:50.508667+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5ee960
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265651 data_alloc: 218103808 data_used: 815104
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d6372c0
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:51.510302+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:52.510495+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:53.510710+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:54.510897+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:55.511066+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:56.511280+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:57.511738+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:58.512670+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:59.513111+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:00.513612+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:01.513818+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:02.514130+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:03.514275+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:04.514486+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:05.514660+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:06.515179+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:07.515294+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:08.515947+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:09.516507+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:10.517091+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:11.517884+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:12.518352+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:13.518538+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:14.518764+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:15.518898+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:16.519299+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:17.519521+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:18.519887+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:19.520334+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:20.520601+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:21.520756+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:22.520954+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:23.521087+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:24.521531+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:25.521665+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:26.522025+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:27.522249+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:28.522497+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:29.522652+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:30.522770+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:31.522904+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:32.523149+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:33.523328+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:34.523480+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:35.523605+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:36.523729+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:37.523860+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:38.524003+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:39.524078+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:40.524207+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:41.524362+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:42.524469+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:43.524632+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:44.524785+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:45.524976+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:46.525189+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:47.525327+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:48.525512+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:49.525636+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:50.525811+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:51.525941+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:52.526089+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:53.526246+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:54.526407+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:55.526539+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:56.526730+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:57.526860+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:58.527001+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:59.527144+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:00.527272+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:22:35 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:22:35 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:22:35 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:01.527399+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:02.527555+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 20701184 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}'
Oct 08 10:22:35 compute-0 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 08 10:22:35 compute-0 ceph-osd[81751]: do_command 'config show' '{prefix=config show}'
Oct 08 10:22:35 compute-0 ceph-osd[81751]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 08 10:22:35 compute-0 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}'
Oct 08 10:22:35 compute-0 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 08 10:22:35 compute-0 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}'
Oct 08 10:22:35 compute-0 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:03.527692+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 21078016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:22:35 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:04.527828+0000)
Oct 08 10:22:35 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 20652032 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:22:35 compute-0 ceph-osd[81751]: do_command 'log dump' '{prefix=log dump}'
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.16476 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/798948786' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1739770367' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3149090838' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.26378 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/167441326' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.16488 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/674684876' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/410494144' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.26393 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/114972148' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.26164 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:35 compute-0 ceph-mon[73572]: from='client.16503 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:22:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:22:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26411 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 08 10:22:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471155663' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:22:35 compute-0 nova_compute[262220]: 2025-10-08 10:22:35.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26176 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26423 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26432 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 08 10:22:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274900231' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:22:36 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 10:22:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26197 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:36.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16527 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26447 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 08 10:22:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389033116' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26218 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2523736736' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.26411 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1471155663' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2293668547' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.26176 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.26423 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.26432 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/539448447' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1274900231' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/619427747' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.26197 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.16527 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2389033116' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2790607250' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:22:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16551 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26468 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 08 10:22:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592158987' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 08 10:22:37 compute-0 crontab[288046]: (root) LIST (root)
Oct 08 10:22:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:37.208Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:22:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:37.209Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:37.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16566 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26242 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26483 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16581 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26263 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.26447 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3234256066' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.26218 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.16551 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.26468 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3592158987' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3695097903' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3832269222' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.16566 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.26242 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.26483 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.16581 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/855285222' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:22:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26266 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16602 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 08 10:22:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1579170724' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 08 10:22:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3820414779' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26287 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26528 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:38.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16614 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 08 10:22:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159410719' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26293 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 nova_compute[262220]: 2025-10-08 10:22:38.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26543 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16626 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:38.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.26263 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.26266 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.16602 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1579170724' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3820414779' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.26287 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3606556890' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.26528 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.16614 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2159410719' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2696691565' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.26293 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/936656447' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 08 10:22:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26314 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 08 10:22:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93720885' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 08 10:22:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:39.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 08 10:22:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096862975' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:39 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26332 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 08 10:22:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323351107' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 08 10:22:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237813266' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26344 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.26543 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.16626 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/574979137' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.26314 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1196131169' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/93720885' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4099203903' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4096862975' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2223700284' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.26332 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3614057764' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1852260455' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1156797148' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/323351107' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 08 10:22:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3237813266' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 08 10:22:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678632811' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 08 10:22:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836056212' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 08 10:22:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893590644' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 08 10:22:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375156075' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 nova_compute[262220]: 2025-10-08 10:22:40.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 08 10:22:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769709891' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.26344 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3833991625' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2676575817' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1678632811' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1021095986' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2836056212' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1916410088' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/182313924' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3233022193' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3893590644' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2612453300' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/375156075' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2776197333' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2453932214' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3824601172' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/769709891' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 10:22:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2147564251' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 08 10:22:41 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012908329' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 08 10:22:41 compute-0 systemd[1]: Starting Hostname Service...
Oct 08 10:22:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:41.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 08 10:22:41 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14823960' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 08 10:22:41 compute-0 systemd[1]: Started Hostname Service.
Oct 08 10:22:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 08 10:22:41 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722090368' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16734 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26428 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4012908329' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2744813623' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3255820750' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/14823960' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3144882419' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/464767023' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/722090368' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1906398297' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3189463611' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:41 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1730447509' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 08 10:22:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 08 10:22:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/227637344' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 08 10:22:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26711 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16758 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26717 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:42.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16764 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26726 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16782 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26744 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26476 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.16734 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.26428 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/227637344' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1077543171' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1001102522' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.26711 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.16758 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.26717 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.16764 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2860491976' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.26726 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2109382734' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 08 10:22:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205486850' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16800 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26762 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26497 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26506 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 nova_compute[262220]: 2025-10-08 10:22:43.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 08 10:22:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543490241' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16812 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26783 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26524 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16827 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 08 10:22:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3027080278' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.16782 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.26744 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.26476 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3056797886' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3205486850' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/154206613' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.16800 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.26762 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.26497 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.26506 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3543490241' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1465498113' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26801 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26542 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 08 10:22:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274130406' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16839 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26819 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16875 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:44 compute-0 podman[289083]: 2025-10-08 10:22:44.91170518 +0000 UTC m=+0.062690103 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, container_name=iscsid)
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.16812 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.26783 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.26524 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.16827 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3027080278' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3331991054' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.26801 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3671673790' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.26542 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3274130406' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.16839 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2422396037' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.26819 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3379270858' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:45 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26602 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 08 10:22:45 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2585883534' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:45 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26620 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:45 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16908 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:22:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:22:45 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:45 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:45 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26882 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:45 compute-0 nova_compute[262220]: 2025-10-08 10:22:45.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:46 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26635 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='client.26566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='client.16875 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='client.26602 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2585883534' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3861183227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3500127791' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='client.26620 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2820438086' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:46 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 08 10:22:46 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797094569' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 08 10:22:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:46.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 08 10:22:46 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573220540' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 08 10:22:46 compute-0 nova_compute[262220]: 2025-10-08 10:22:46.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 08 10:22:46 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231813826' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 08 10:22:46 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26674 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='client.16908 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='client.26882 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='client.26635 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1797094569' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3023388905' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/573220540' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1711338695' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3246985620' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1231813826' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 08 10:22:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:47.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:22:47
Oct 08 10:22:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:22:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:22:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.nfs', '.rgw.root', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta']
Oct 08 10:22:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:22:47 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16956 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:22:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:22:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:22:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:22:48 compute-0 ceph-mon[73572]: from='client.26674 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2361417571' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 08 10:22:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/292787309' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 08 10:22:48 compute-0 ceph-mon[73572]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4106237471' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 08 10:22:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1038547666' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 08 10:22:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:22:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/530860837' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26951 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:22:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 08 10:22:48 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053191720' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:22:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:22:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:48.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:48 compute-0 nova_compute[262220]: 2025-10-08 10:22:48.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 08 10:22:48 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804644638' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 08 10:22:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:48.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:22:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:48.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16977 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:49 compute-0 ceph-mon[73572]: from='client.16956 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:49 compute-0 ceph-mon[73572]: from='client.26951 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2053191720' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 08 10:22:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/854628801' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 08 10:22:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1115419495' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 08 10:22:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2804644638' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 08 10:22:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3058621579' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 08 10:22:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:49.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26719 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 08 10:22:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3418491503' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 08 10:22:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26984 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:49 compute-0 nova_compute[262220]: 2025-10-08 10:22:49.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:49 compute-0 nova_compute[262220]: 2025-10-08 10:22:49.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16998 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:50 compute-0 sudo[289838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:22:50 compute-0 sudo[289838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:22:50 compute-0 sudo[289838]: pam_unix(sudo:session): session closed for user root
Oct 08 10:22:50 compute-0 ceph-mon[73572]: from='client.16977 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3430441172' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 08 10:22:50 compute-0 ceph-mon[73572]: from='client.26719 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:50 compute-0 ceph-mon[73572]: pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3418491503' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 08 10:22:50 compute-0 ceph-mon[73572]: from='client.26984 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1356043015' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 08 10:22:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1379258890' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 08 10:22:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17004 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:22:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:22:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27005 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct 08 10:22:50 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873220482' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 08 10:22:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26752 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:50 compute-0 nova_compute[262220]: 2025-10-08 10:22:50.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:50 compute-0 nova_compute[262220]: 2025-10-08 10:22:50.885 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27014 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct 08 10:22:51 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876032476' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 08 10:22:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:51.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:51 compute-0 ceph-mon[73572]: from='client.16998 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mon[73572]: from='client.17004 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2445107495' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mon[73572]: from='client.27005 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3873220482' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1876032476' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17028 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:51 compute-0 ovs-appctl[290574]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 08 10:22:51 compute-0 ovs-appctl[290581]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 08 10:22:51 compute-0 ovs-appctl[290588]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26773 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17037 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:51 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:22:51 compute-0 nova_compute[262220]: 2025-10-08 10:22:51.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:51 compute-0 nova_compute[262220]: 2025-10-08 10:22:51.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:22:51 compute-0 nova_compute[262220]: 2025-10-08 10:22:51.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:22:51 compute-0 nova_compute[262220]: 2025-10-08 10:22:51.909 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26779 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27038 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:52.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct 08 10:22:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239875668' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 08 10:22:52 compute-0 ceph-mon[73572]: from='client.26752 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:52 compute-0 ceph-mon[73572]: from='client.27014 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1693016667' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 08 10:22:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2213548349' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 08 10:22:52 compute-0 ceph-mon[73572]: from='client.17028 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:52 compute-0 ceph-mon[73572]: pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3238278056' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27047 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:52 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:22:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct 08 10:22:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1916664575' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 08 10:22:52 compute-0 nova_compute[262220]: 2025-10-08 10:22:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:52 compute-0 nova_compute[262220]: 2025-10-08 10:22:52.937 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:22:52 compute-0 nova_compute[262220]: 2025-10-08 10:22:52.938 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:22:52 compute-0 nova_compute[262220]: 2025-10-08 10:22:52.938 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:22:52 compute-0 nova_compute[262220]: 2025-10-08 10:22:52.938 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:22:52 compute-0 nova_compute[262220]: 2025-10-08 10:22:52.938 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17061 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:53.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26812 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:22:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362724180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:22:53 compute-0 nova_compute[262220]: 2025-10-08 10:22:53.422 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
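Note: the resource audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to measure the capacity backing RBD-based disks. A small sketch of issuing the same probe and reading the totals, assuming the usual ceph df JSON layout with a top-level "stats" object and a "pools" list; the field names are assumptions to verify against the local Ceph release, and the "vms" pool is just the pool this deployment uses for instance disks.

# Run the capacity probe nova logs above and pull out the cluster totals.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
df = json.loads(out)

total_gib = df["stats"]["total_bytes"] / 1024 ** 3
avail_gib = df["stats"]["total_avail_bytes"] / 1024 ** 3
print(f"cluster: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB")  # ~60 GiB in the pgmap lines above

for pool in df["pools"]:
    if pool["name"] == "vms":  # pool holding nova ephemeral disks in this deployment
        print(pool["name"], pool["stats"].get("bytes_used"), pool["stats"].get("max_avail"))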
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.26773 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.17037 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.26779 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.27038 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4239875668' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1309053150' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.27047 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1916664575' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4000374081' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/121824156' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/774512979' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/362724180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:53 compute-0 nova_compute[262220]: 2025-10-08 10:22:53.573 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:22:53 compute-0 nova_compute[262220]: 2025-10-08 10:22:53.575 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4338MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:22:53 compute-0 nova_compute[262220]: 2025-10-08 10:22:53.575 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:22:53 compute-0 nova_compute[262220]: 2025-10-08 10:22:53.576 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:22:53 compute-0 nova_compute[262220]: 2025-10-08 10:22:53.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17079 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 nova_compute[262220]: 2025-10-08 10:22:53.647 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:22:53 compute-0 nova_compute[262220]: 2025-10-08 10:22:53.648 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27074 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 nova_compute[262220]: 2025-10-08 10:22:53.672 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26824 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:22:53 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:22:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:54 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27089 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:22:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86879457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:22:54 compute-0 nova_compute[262220]: 2025-10-08 10:22:54.146 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:22:54 compute-0 nova_compute[262220]: 2025-10-08 10:22:54.151 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:22:54 compute-0 nova_compute[262220]: 2025-10-08 10:22:54.166 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:22:54 compute-0 nova_compute[262220]: 2025-10-08 10:22:54.167 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:22:54 compute-0 nova_compute[262220]: 2025-10-08 10:22:54.167 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
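Note: the inventory record reported at 10:22:54.166 carries the allocation ratios the scheduler works from; schedulable capacity is commonly derived as (total - reserved) * allocation_ratio. A worked sketch under that assumption, with the values copied from the log line above:

# Effective schedulable capacity from the inventory logged above,
# assuming the capacity rule (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2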
Oct 08 10:22:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:54.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct 08 10:22:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052212597' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 08 10:22:54 compute-0 ceph-mon[73572]: from='client.17061 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:54 compute-0 ceph-mon[73572]: from='client.26812 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:54 compute-0 ceph-mon[73572]: pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:54 compute-0 ceph-mon[73572]: from='client.17079 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3397358693' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:22:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2303591047' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 08 10:22:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/86879457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:22:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct 08 10:22:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993112683' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26857 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:55 compute-0 nova_compute[262220]: 2025-10-08 10:22:55.163 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:55 compute-0 nova_compute[262220]: 2025-10-08 10:22:55.164 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:55 compute-0 nova_compute[262220]: 2025-10-08 10:22:55.164 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:55 compute-0 nova_compute[262220]: 2025-10-08 10:22:55.164 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:22:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:55.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:55 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17124 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26863 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.27074 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.26824 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.27089 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3211284877' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/151351275' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3052212597' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4237574187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2993112683' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1349709411' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 08 10:22:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/4076510746' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:22:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:22:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct 08 10:22:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757659921' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:22:55 compute-0 nova_compute[262220]: 2025-10-08 10:22:55.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:55 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27131 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct 08 10:22:56 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108141833' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 08 10:22:56 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 08 10:22:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:56.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:56 compute-0 podman[292132]: 2025-10-08 10:22:56.439229307 +0000 UTC m=+0.133758554 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.26857 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.17124 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.26863 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3232171328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3757659921' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.27131 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2421764452' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3590559736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1108141833' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2392198530' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4180202619' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 08 10:22:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct 08 10:22:56 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000945604' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct 08 10:22:57 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452857958' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:57.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:22:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:22:57 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26911 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:22:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:22:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:22:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:22:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:22:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:22:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:57 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17163 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4000945604' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/539139547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/4114452547' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2518374838' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1452857958' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2662081233' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: from='client.26911 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:22:57 compute-0 ceph-mon[73572]: from='client.17163 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3578546385' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct 08 10:22:58 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637272810' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27185 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:22:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:22:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct 08 10:22:58 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644108507' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 nova_compute[262220]: 2025-10-08 10:22:58.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:22:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2594133218' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2637272810' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 ceph-mon[73572]: from='client.27185 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3800183374' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2644108507' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1036092895' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1817319861' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:58.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:22:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:58.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:22:58 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17184 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:22:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:22:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:22:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:22:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:22:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:22:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:59.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:22:59 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27212 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct 08 10:22:59 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979068801' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:22:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:59 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26944 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17199 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 nova_compute[262220]: 2025-10-08 10:22:59.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:22:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/4178148940' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-mon[73572]: from='client.17184 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2039541995' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-mon[73572]: from='client.27212 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2979068801' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-mon[73572]: pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:22:59 compute-0 ceph-mon[73572]: from='client.26944 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:22:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1085897874' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27236 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17205 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:00.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:00 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27245 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct 08 10:23:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874241760' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 podman[292442]: 2025-10-08 10:23:00.55336167 +0000 UTC m=+0.072995886 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct 08 10:23:00 compute-0 podman[292444]: 2025-10-08 10:23:00.580100315 +0000 UTC m=+0.099857615 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 08 10:23:00 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26974 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 nova_compute[262220]: 2025-10-08 10:23:00.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct 08 10:23:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3967471108' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: from='client.17199 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3021305761' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: from='client.27236 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: from='client.17205 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: from='client.27245 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/78228190' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/874241760' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2424998560' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 08 10:23:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3967471108' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:01.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27266 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26995 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27278 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17229 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mon[73572]: from='client.26974 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1961894866' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1273625594' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mon[73572]: from='client.27266 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mon[73572]: pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:01 compute-0 ceph-mon[73572]: from='client.26995 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mon[73572]: from='client.27278 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mon[73572]: from='client.17229 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27284 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:23:01 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27004 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct 08 10:23:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2341215762' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct 08 10:23:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3460873467' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17253 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 08 10:23:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:23:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: from='client.27284 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: from='client.27004 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2341215762' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3460873467' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1205474079' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2675230901' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1420344571' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4231946492' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:02 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27022 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27311 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17262 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:03.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:23:03 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7382 writes, 32K keys, 7382 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 7382 writes, 7382 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1600 writes, 7120 keys, 1600 commit groups, 1.0 writes per commit group, ingest: 11.92 MB, 0.02 MB/s
                                           Interval WAL: 1600 writes, 1600 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     91.5      0.56              0.13        18    0.031       0      0       0.0       0.0
                                             L6      1/0   13.34 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.3    135.2    115.4      1.90              0.51        17    0.112     93K   9474       0.0       0.0
                                            Sum      1/0   13.34 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.3    104.5    110.0      2.46              0.65        35    0.070     93K   9474       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.9    103.0    105.5      0.62              0.19         8    0.078     26K   2564       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0    135.2    115.4      1.90              0.51        17    0.112     93K   9474       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     92.0      0.55              0.13        17    0.033       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.26 GB write, 0.11 MB/s write, 0.25 GB read, 0.11 MB/s read, 2.5 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 304.00 MB usage: 24.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000192 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1488,23.66 MB,7.78308%) FilterBlock(36,274.42 KB,0.0881546%) IndexBlock(36,482.33 KB,0.154942%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27028 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:23:03 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27323 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:03 compute-0 nova_compute[262220]: 2025-10-08 10:23:03.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 08 10:23:03 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907194708' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:03 compute-0 systemd[1]: Starting Time & Date Service...
Oct 08 10:23:03 compute-0 systemd[1]: Started Time & Date Service.
Oct 08 10:23:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:04 compute-0 ceph-mon[73572]: from='client.17253 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:04 compute-0 ceph-mon[73572]: from='client.27022 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:04 compute-0 ceph-mon[73572]: from='client.27311 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:04 compute-0 ceph-mon[73572]: from='client.17262 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:04 compute-0 ceph-mon[73572]: pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:04 compute-0 ceph-mon[73572]: from='client.27028 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:04 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1907194708' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct 08 10:23:04 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508599741' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:04 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27055 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:05 compute-0 ceph-mon[73572]: from='client.27323 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:05 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/519781074' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:23:05 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3508599741' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:05 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/244999110' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:05 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4065304304' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 08 10:23:05 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2774109384' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:05 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27061 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:05.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:05] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:23:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:05] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:23:05 compute-0 nova_compute[262220]: 2025-10-08 10:23:05.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:06 compute-0 ceph-mon[73572]: from='client.27055 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:06 compute-0 ceph-mon[73572]: from='client.27061 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:23:06 compute-0 ceph-mon[73572]: pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:06 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3791785045' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:06.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:07 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2527743774' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 08 10:23:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:07.212Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:23:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:07.212Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:23:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:07.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:23:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:23:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:07.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:23:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:08 compute-0 ceph-mon[73572]: pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:08.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:08 compute-0 nova_compute[262220]: 2025-10-08 10:23:08.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:08.864Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:23:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:08.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 08 10:23:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:09.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 08 10:23:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:10 compute-0 sudo[293145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:23:10 compute-0 sudo[293145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:10 compute-0 sudo[293145]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:10.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:10 compute-0 ceph-mon[73572]: pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:10 compute-0 nova_compute[262220]: 2025-10-08 10:23:10.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:11.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:12.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:12 compute-0 ceph-mon[73572]: pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:13.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:13 compute-0 nova_compute[262220]: 2025-10-08 10:23:13.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:13 compute-0 ceph-mon[73572]: pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:14.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:15.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:15] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:23:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:15] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:23:15 compute-0 podman[293175]: 2025-10-08 10:23:15.912816196 +0000 UTC m=+0.068464948 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid)
Oct 08 10:23:15 compute-0 nova_compute[262220]: 2025-10-08 10:23:15.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:16.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:16 compute-0 ceph-mon[73572]: pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:17.213Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:23:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:17.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:17.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:23:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:23:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:23:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:23:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:23:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:23:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:23:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:23:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:18.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:18 compute-0 nova_compute[262220]: 2025-10-08 10:23:18.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:18 compute-0 ceph-mon[73572]: pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:23:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:18.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:18 compute-0 sudo[293198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:23:18 compute-0 sudo[293198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:18 compute-0 sudo[293198]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:19 compute-0 sudo[293223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:23:19 compute-0 sudo[293223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:19.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:19 compute-0 sudo[293223]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:19 compute-0 ceph-mon[73572]: pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:23:19 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:23:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:23:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:23:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:23:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:23:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:23:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:23:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:23:19 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:23:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:23:19 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:23:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:23:19 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:23:19 compute-0 sudo[293280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:23:19 compute-0 sudo[293280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:19 compute-0 sudo[293280]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:19 compute-0 sudo[293305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:23:19 compute-0 sudo[293305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:20 compute-0 podman[293370]: 2025-10-08 10:23:20.29058959 +0000 UTC m=+0.042038433 container create d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:23:20 compute-0 systemd[1]: Started libpod-conmon-d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c.scope.
Oct 08 10:23:20 compute-0 podman[293370]: 2025-10-08 10:23:20.269847337 +0000 UTC m=+0.021296200 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:23:20 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:23:20 compute-0 podman[293370]: 2025-10-08 10:23:20.388905595 +0000 UTC m=+0.140354438 container init d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:23:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:20 compute-0 podman[293370]: 2025-10-08 10:23:20.397605746 +0000 UTC m=+0.149054579 container start d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:23:20 compute-0 happy_villani[293386]: 167 167
Oct 08 10:23:20 compute-0 systemd[1]: libpod-d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c.scope: Deactivated successfully.
Oct 08 10:23:20 compute-0 conmon[293386]: conmon d0ce206ff219867aca0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c.scope/container/memory.events
Oct 08 10:23:20 compute-0 podman[293370]: 2025-10-08 10:23:20.404049895 +0000 UTC m=+0.155498758 container attach d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 10:23:20 compute-0 podman[293370]: 2025-10-08 10:23:20.405671758 +0000 UTC m=+0.157120601 container died d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 10:23:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-17eda232fc737d9e66aa5234a02c2217a8defa46d7ddaa4f7c12a2addf028196-merged.mount: Deactivated successfully.
Oct 08 10:23:20 compute-0 podman[293370]: 2025-10-08 10:23:20.451370788 +0000 UTC m=+0.202819631 container remove d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 10:23:20 compute-0 systemd[1]: libpod-conmon-d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c.scope: Deactivated successfully.
Oct 08 10:23:20 compute-0 podman[293411]: 2025-10-08 10:23:20.59405349 +0000 UTC m=+0.023647896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:23:20 compute-0 podman[293411]: 2025-10-08 10:23:20.693685649 +0000 UTC m=+0.123280055 container create 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 10:23:20 compute-0 systemd[1]: Started libpod-conmon-7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d.scope.
Oct 08 10:23:20 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:20 compute-0 podman[293411]: 2025-10-08 10:23:20.82552844 +0000 UTC m=+0.255122856 container init 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:23:20 compute-0 podman[293411]: 2025-10-08 10:23:20.837715154 +0000 UTC m=+0.267309570 container start 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:23:20 compute-0 podman[293411]: 2025-10-08 10:23:20.842271102 +0000 UTC m=+0.271865508 container attach 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 08 10:23:20 compute-0 nova_compute[262220]: 2025-10-08 10:23:20.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:21 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:23:21 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:23:21 compute-0 ceph-mon[73572]: pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:21 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:23:21 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:23:21 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:23:21 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:23:21 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:23:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1657059829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:23:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1657059829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:23:21 compute-0 amazing_rhodes[293428]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:23:21 compute-0 amazing_rhodes[293428]: --> All data devices are unavailable
Oct 08 10:23:21 compute-0 systemd[1]: libpod-7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d.scope: Deactivated successfully.
Oct 08 10:23:21 compute-0 podman[293411]: 2025-10-08 10:23:21.210311455 +0000 UTC m=+0.639905931 container died 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct 08 10:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334-merged.mount: Deactivated successfully.
Oct 08 10:23:21 compute-0 podman[293411]: 2025-10-08 10:23:21.262790435 +0000 UTC m=+0.692384841 container remove 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:23:21 compute-0 systemd[1]: libpod-conmon-7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d.scope: Deactivated successfully.
Oct 08 10:23:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:21.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:21 compute-0 sudo[293305]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:21 compute-0 sudo[293458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:23:21 compute-0 sudo[293458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:21 compute-0 sudo[293458]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:21 compute-0 sudo[293483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:23:21 compute-0 sudo[293483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:21 compute-0 podman[293550]: 2025-10-08 10:23:21.808586006 +0000 UTC m=+0.040665038 container create f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:23:21 compute-0 systemd[1]: Started libpod-conmon-f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5.scope.
Oct 08 10:23:21 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:23:21 compute-0 podman[293550]: 2025-10-08 10:23:21.793233079 +0000 UTC m=+0.025312121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:23:21 compute-0 podman[293550]: 2025-10-08 10:23:21.889997324 +0000 UTC m=+0.122076436 container init f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 08 10:23:21 compute-0 podman[293550]: 2025-10-08 10:23:21.897162007 +0000 UTC m=+0.129241029 container start f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 08 10:23:21 compute-0 pedantic_nash[293567]: 167 167
Oct 08 10:23:21 compute-0 podman[293550]: 2025-10-08 10:23:21.900398762 +0000 UTC m=+0.132477824 container attach f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 10:23:21 compute-0 systemd[1]: libpod-f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5.scope: Deactivated successfully.
Oct 08 10:23:21 compute-0 podman[293550]: 2025-10-08 10:23:21.900975119 +0000 UTC m=+0.133054161 container died f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0809961297a027ca39bcbd93f50b5d27685c3f2922143f28763a4560b46ed2e-merged.mount: Deactivated successfully.
Oct 08 10:23:21 compute-0 podman[293550]: 2025-10-08 10:23:21.940174349 +0000 UTC m=+0.172253381 container remove f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:23:21 compute-0 systemd[1]: libpod-conmon-f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5.scope: Deactivated successfully.
Oct 08 10:23:22 compute-0 podman[293593]: 2025-10-08 10:23:22.112977698 +0000 UTC m=+0.045437653 container create 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:23:22 compute-0 systemd[1]: Started libpod-conmon-2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06.scope.
Oct 08 10:23:22 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:22 compute-0 podman[293593]: 2025-10-08 10:23:22.090890813 +0000 UTC m=+0.023350788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:22 compute-0 podman[293593]: 2025-10-08 10:23:22.202660434 +0000 UTC m=+0.135120399 container init 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:23:22 compute-0 podman[293593]: 2025-10-08 10:23:22.210059663 +0000 UTC m=+0.142519608 container start 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 08 10:23:22 compute-0 podman[293593]: 2025-10-08 10:23:22.214164396 +0000 UTC m=+0.146624341 container attach 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 08 10:23:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:22.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]: {
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:     "1": [
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:         {
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "devices": [
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "/dev/loop3"
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             ],
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "lv_name": "ceph_lv0",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "lv_size": "21470642176",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "name": "ceph_lv0",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "tags": {
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.cluster_name": "ceph",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.crush_device_class": "",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.encrypted": "0",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.osd_id": "1",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.type": "block",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.vdo": "0",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:                 "ceph.with_tpm": "0"
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             },
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "type": "block",
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:             "vg_name": "ceph_vg0"
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:         }
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]:     ]
Oct 08 10:23:22 compute-0 mystifying_snyder[293610]: }
Oct 08 10:23:22 compute-0 systemd[1]: libpod-2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06.scope: Deactivated successfully.
Oct 08 10:23:22 compute-0 podman[293593]: 2025-10-08 10:23:22.489054851 +0000 UTC m=+0.421514816 container died 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 08 10:23:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e-merged.mount: Deactivated successfully.
Oct 08 10:23:22 compute-0 podman[293593]: 2025-10-08 10:23:22.53066992 +0000 UTC m=+0.463129865 container remove 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 10:23:22 compute-0 systemd[1]: libpod-conmon-2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06.scope: Deactivated successfully.
Oct 08 10:23:22 compute-0 sudo[293483]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:22 compute-0 sudo[293631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:23:22 compute-0 sudo[293631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:22 compute-0 sudo[293631]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:22 compute-0 sudo[293656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:23:22 compute-0 sudo[293656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:23 compute-0 podman[293721]: 2025-10-08 10:23:23.104594893 +0000 UTC m=+0.039589044 container create ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 10:23:23 compute-0 systemd[1]: Started libpod-conmon-ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36.scope.
Oct 08 10:23:23 compute-0 podman[293721]: 2025-10-08 10:23:23.087256131 +0000 UTC m=+0.022250202 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:23:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:23:23 compute-0 podman[293721]: 2025-10-08 10:23:23.226660127 +0000 UTC m=+0.161654238 container init ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:23:23 compute-0 podman[293721]: 2025-10-08 10:23:23.233980744 +0000 UTC m=+0.168974775 container start ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Oct 08 10:23:23 compute-0 podman[293721]: 2025-10-08 10:23:23.238740909 +0000 UTC m=+0.173735030 container attach ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:23:23 compute-0 adoring_curran[293737]: 167 167
Oct 08 10:23:23 compute-0 systemd[1]: libpod-ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36.scope: Deactivated successfully.
Oct 08 10:23:23 compute-0 podman[293721]: 2025-10-08 10:23:23.243341797 +0000 UTC m=+0.178335868 container died ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 08 10:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-493c6bc88c9e786d2d14eda2c830b9bd5258e42342aad5a45834caed28fb85f7-merged.mount: Deactivated successfully.
Oct 08 10:23:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:23.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:23 compute-0 podman[293721]: 2025-10-08 10:23:23.301420189 +0000 UTC m=+0.236414260 container remove ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 10:23:23 compute-0 systemd[1]: libpod-conmon-ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36.scope: Deactivated successfully.
Oct 08 10:23:23 compute-0 ceph-mon[73572]: pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:23 compute-0 podman[293764]: 2025-10-08 10:23:23.506744921 +0000 UTC m=+0.061664429 container create 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 10:23:23 compute-0 systemd[1]: Started libpod-conmon-894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08.scope.
Oct 08 10:23:23 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:23:23 compute-0 podman[293764]: 2025-10-08 10:23:23.485977608 +0000 UTC m=+0.040897166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:23:23 compute-0 podman[293764]: 2025-10-08 10:23:23.596725746 +0000 UTC m=+0.151645254 container init 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:23:23 compute-0 podman[293764]: 2025-10-08 10:23:23.605681876 +0000 UTC m=+0.160601394 container start 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:23:23 compute-0 podman[293764]: 2025-10-08 10:23:23.61135262 +0000 UTC m=+0.166272118 container attach 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:23:23 compute-0 nova_compute[262220]: 2025-10-08 10:23:23.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct 08 10:23:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:24 compute-0 lvm[293855]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:23:24 compute-0 lvm[293855]: VG ceph_vg0 finished
Oct 08 10:23:24 compute-0 ecstatic_hugle[293780]: {}
Oct 08 10:23:24 compute-0 systemd[1]: libpod-894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08.scope: Deactivated successfully.
Oct 08 10:23:24 compute-0 podman[293764]: 2025-10-08 10:23:24.265193592 +0000 UTC m=+0.820113080 container died 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:23:24 compute-0 systemd[1]: libpod-894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08.scope: Consumed 1.071s CPU time.
Oct 08 10:23:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f-merged.mount: Deactivated successfully.
Oct 08 10:23:24 compute-0 podman[293764]: 2025-10-08 10:23:24.315093408 +0000 UTC m=+0.870012886 container remove 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 10:23:24 compute-0 systemd[1]: libpod-conmon-894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08.scope: Deactivated successfully.
Oct 08 10:23:24 compute-0 sudo[293656]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:23:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:23:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:23:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:24.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:24 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:23:24 compute-0 sudo[293871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:23:24 compute-0 sudo[293871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:24 compute-0 sudo[293871]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:25.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:25 compute-0 ceph-mon[73572]: pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct 08 10:23:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:23:25 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:23:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:23:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:23:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:25 compute-0 nova_compute[262220]: 2025-10-08 10:23:25.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:23:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:26.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:23:26 compute-0 podman[293898]: 2025-10-08 10:23:26.928066639 +0000 UTC m=+0.076531561 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 08 10:23:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:27.218Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:23:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:27.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:27.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:27 compute-0 ceph-mon[73572]: pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:28.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:28 compute-0 nova_compute[262220]: 2025-10-08 10:23:28.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:28 compute-0 ceph-mon[73572]: pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:28.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:29.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:30 compute-0 sudo[293929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:23:30 compute-0 sudo[293929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:30 compute-0 sudo[293929]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:30 compute-0 ceph-mon[73572]: pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:23:30 compute-0 podman[293955]: 2025-10-08 10:23:30.891691674 +0000 UTC m=+0.050479477 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 10:23:30 compute-0 podman[293954]: 2025-10-08 10:23:30.895118395 +0000 UTC m=+0.055322373 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 08 10:23:30 compute-0 nova_compute[262220]: 2025-10-08 10:23:30.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:31.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:32.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:32 compute-0 ceph-mon[73572]: pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:23:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:23:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:33 compute-0 nova_compute[262220]: 2025-10-08 10:23:33.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:33 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 08 10:23:33 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 08 10:23:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:34 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:23:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:35 compute-0 ceph-mon[73572]: pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:35.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:35] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:23:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:35] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:23:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:35 compute-0 nova_compute[262220]: 2025-10-08 10:23:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:36.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:37 compute-0 ceph-mon[73572]: pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:37.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:37.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:38.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:38 compute-0 nova_compute[262220]: 2025-10-08 10:23:38.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:38.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:39 compute-0 ceph-mon[73572]: pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:39.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:40.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:40 compute-0 nova_compute[262220]: 2025-10-08 10:23:40.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:41 compute-0 ceph-mon[73572]: pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:41.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:42.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:42 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 08 10:23:43 compute-0 ceph-mon[73572]: pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:43.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:43 compute-0 nova_compute[262220]: 2025-10-08 10:23:43.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:45.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:45 compute-0 ceph-mon[73572]: pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:45] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:23:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:45] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:23:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:45 compute-0 nova_compute[262220]: 2025-10-08 10:23:45.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:46.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:46 compute-0 podman[294011]: 2025-10-08 10:23:46.917114886 +0000 UTC m=+0.070652860 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:23:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:47.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:47.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:47 compute-0 ceph-mon[73572]: pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:23:47
Oct 08 10:23:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:23:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:23:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.nfs', '.rgw.root', 'images', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'vms']
Oct 08 10:23:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:23:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:23:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:23:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:23:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:23:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:23:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:48.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:23:48 compute-0 nova_compute[262220]: 2025-10-08 10:23:48.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:48.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:49.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:49 compute-0 ceph-mon[73572]: pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:50 compute-0 sudo[294036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:23:50 compute-0 sudo[294036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:23:50 compute-0 sudo[294036]: pam_unix(sudo:session): session closed for user root
Oct 08 10:23:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:50.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:50 compute-0 nova_compute[262220]: 2025-10-08 10:23:50.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:23:50 compute-0 nova_compute[262220]: 2025-10-08 10:23:50.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:23:50 compute-0 nova_compute[262220]: 2025-10-08 10:23:50.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:51 compute-0 ceph-mon[73572]: pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:51 compute-0 nova_compute[262220]: 2025-10-08 10:23:51.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:23:51 compute-0 nova_compute[262220]: 2025-10-08 10:23:51.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:23:51 compute-0 nova_compute[262220]: 2025-10-08 10:23:51.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:23:52 compute-0 nova_compute[262220]: 2025-10-08 10:23:52.017 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:23:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:52.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:52 compute-0 nova_compute[262220]: 2025-10-08 10:23:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:23:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:53.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:53 compute-0 ceph-mon[73572]: pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:53 compute-0 nova_compute[262220]: 2025-10-08 10:23:53.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:53 compute-0 nova_compute[262220]: 2025-10-08 10:23:53.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:23:53 compute-0 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:23:53 compute-0 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:23:53 compute-0 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:23:53 compute-0 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:23:53 compute-0 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:23:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:54.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:23:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942011370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:23:54 compute-0 nova_compute[262220]: 2025-10-08 10:23:54.488 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:23:54 compute-0 nova_compute[262220]: 2025-10-08 10:23:54.659 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:23:54 compute-0 nova_compute[262220]: 2025-10-08 10:23:54.660 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4373MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:23:54 compute-0 nova_compute[262220]: 2025-10-08 10:23:54.661 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:23:54 compute-0 nova_compute[262220]: 2025-10-08 10:23:54.661 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:23:54 compute-0 nova_compute[262220]: 2025-10-08 10:23:54.728 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:23:54 compute-0 nova_compute[262220]: 2025-10-08 10:23:54.729 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:23:54 compute-0 nova_compute[262220]: 2025-10-08 10:23:54.795 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:23:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1942011370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:23:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:23:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956368669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:23:55 compute-0 nova_compute[262220]: 2025-10-08 10:23:55.228 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:23:55 compute-0 nova_compute[262220]: 2025-10-08 10:23:55.234 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:23:55 compute-0 nova_compute[262220]: 2025-10-08 10:23:55.252 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:23:55 compute-0 nova_compute[262220]: 2025-10-08 10:23:55.254 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:23:55 compute-0 nova_compute[262220]: 2025-10-08 10:23:55.254 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:23:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:55.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:23:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:23:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:55 compute-0 nova_compute[262220]: 2025-10-08 10:23:55.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:56 compute-0 ceph-mon[73572]: pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:23:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3956368669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:23:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2151031792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:23:56 compute-0 nova_compute[262220]: 2025-10-08 10:23:56.249 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:23:56 compute-0 nova_compute[262220]: 2025-10-08 10:23:56.250 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:23:56 compute-0 nova_compute[262220]: 2025-10-08 10:23:56.250 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:23:56 compute-0 nova_compute[262220]: 2025-10-08 10:23:56.250 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:23:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:56.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:57 compute-0 ceph-mon[73572]: pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3764063024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:23:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:57.223Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:23:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:57.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:57.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:23:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:23:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:23:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:23:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:23:57.423 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:23:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:57 compute-0 podman[294112]: 2025-10-08 10:23:57.977009963 +0000 UTC m=+0.141653959 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 08 10:23:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/4227357222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:23:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:23:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:58.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:23:58 compute-0 nova_compute[262220]: 2025-10-08 10:23:58.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:23:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:58.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:23:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:23:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:23:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:23:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:23:59 compute-0 ceph-mon[73572]: pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:23:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1880971550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:23:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:23:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:23:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:23:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:23:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:00 compute-0 sudo[285931]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:00 compute-0 sshd-session[285928]: Received disconnect from 192.168.122.10 port 53306:11: disconnected by user
Oct 08 10:24:00 compute-0 sshd-session[285928]: Disconnected from user zuul 192.168.122.10 port 53306
Oct 08 10:24:00 compute-0 sshd-session[285902]: pam_unix(sshd:session): session closed for user zuul
Oct 08 10:24:00 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Oct 08 10:24:00 compute-0 systemd[1]: session-58.scope: Consumed 2min 54.816s CPU time, 750.2M memory peak, read 228.7M from disk, written 101.3M to disk.
Oct 08 10:24:00 compute-0 systemd-logind[798]: Session 58 logged out. Waiting for processes to exit.
Oct 08 10:24:00 compute-0 systemd-logind[798]: Removed session 58.
Oct 08 10:24:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:00.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:00 compute-0 sshd-session[294142]: Accepted publickey for zuul from 192.168.122.10 port 53012 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 10:24:00 compute-0 systemd-logind[798]: New session 59 of user zuul.
Oct 08 10:24:00 compute-0 systemd[1]: Started Session 59 of User zuul.
Oct 08 10:24:00 compute-0 sshd-session[294142]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 10:24:00 compute-0 sudo[294146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-08-kldfrwr.tar.xz
Oct 08 10:24:00 compute-0 sudo[294146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:24:00 compute-0 sudo[294146]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:00 compute-0 sshd-session[294145]: Received disconnect from 192.168.122.10 port 53012:11: disconnected by user
Oct 08 10:24:00 compute-0 sshd-session[294145]: Disconnected from user zuul 192.168.122.10 port 53012
Oct 08 10:24:00 compute-0 sshd-session[294142]: pam_unix(sshd:session): session closed for user zuul
Oct 08 10:24:00 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Oct 08 10:24:00 compute-0 systemd-logind[798]: Session 59 logged out. Waiting for processes to exit.
Oct 08 10:24:00 compute-0 systemd-logind[798]: Removed session 59.
Oct 08 10:24:00 compute-0 sshd-session[294171]: Accepted publickey for zuul from 192.168.122.10 port 53024 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 10:24:00 compute-0 systemd-logind[798]: New session 60 of user zuul.
Oct 08 10:24:00 compute-0 systemd[1]: Started Session 60 of User zuul.
Oct 08 10:24:00 compute-0 sshd-session[294171]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 10:24:00 compute-0 nova_compute[262220]: 2025-10-08 10:24:00.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:00 compute-0 podman[294175]: 2025-10-08 10:24:00.994094225 +0000 UTC m=+0.059801889 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 08 10:24:01 compute-0 podman[294174]: 2025-10-08 10:24:01.000821172 +0000 UTC m=+0.069825423 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 08 10:24:01 compute-0 sudo[294211]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 08 10:24:01 compute-0 sudo[294211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:24:01 compute-0 sudo[294211]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:01 compute-0 sshd-session[294186]: Received disconnect from 192.168.122.10 port 53024:11: disconnected by user
Oct 08 10:24:01 compute-0 sshd-session[294186]: Disconnected from user zuul 192.168.122.10 port 53024
Oct 08 10:24:01 compute-0 sshd-session[294171]: pam_unix(sshd:session): session closed for user zuul
Oct 08 10:24:01 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Oct 08 10:24:01 compute-0 systemd-logind[798]: Session 60 logged out. Waiting for processes to exit.
Oct 08 10:24:01 compute-0 systemd-logind[798]: Removed session 60.
Oct 08 10:24:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:01 compute-0 ceph-mon[73572]: pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:01 compute-0 nova_compute[262220]: 2025-10-08 10:24:01.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:02 compute-0 ceph-mon[73572]: pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:24:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:24:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:24:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:24:03 compute-0 nova_compute[262220]: 2025-10-08 10:24:03.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:24:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:04 compute-0 ceph-mon[73572]: pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:05] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:24:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:05] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:24:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:05 compute-0 nova_compute[262220]: 2025-10-08 10:24:05.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:06.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:06 compute-0 ceph-mon[73572]: pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:07.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:24:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:07.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:24:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:07.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:08.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:08 compute-0 nova_compute[262220]: 2025-10-08 10:24:08.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:08.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:08 compute-0 ceph-mon[73572]: pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:09.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:10 compute-0 sudo[294247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:24:10 compute-0 sudo[294247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:10 compute-0 sudo[294247]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:10.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:10 compute-0 ceph-mon[73572]: pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:10 compute-0 nova_compute[262220]: 2025-10-08 10:24:10.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:11.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:12.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:13 compute-0 ceph-mon[73572]: pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:13.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:13 compute-0 nova_compute[262220]: 2025-10-08 10:24:13.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:14.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:15 compute-0 ceph-mon[73572]: pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:15.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:15] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:24:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:15] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:24:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:15 compute-0 nova_compute[262220]: 2025-10-08 10:24:15.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:16.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:17 compute-0 ceph-mon[73572]: pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:17.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:24:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:24:17 compute-0 podman[294279]: 2025-10-08 10:24:17.889946334 +0000 UTC m=+0.054491305 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 10:24:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:24:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:24:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:24:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:24:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:24:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:24:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:24:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:18.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:18 compute-0 nova_compute[262220]: 2025-10-08 10:24:18.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:18.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:19 compute-0 ceph-mon[73572]: pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:19.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:24:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:24:20 compute-0 nova_compute[262220]: 2025-10-08 10:24:20.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:21 compute-0 ceph-mon[73572]: pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1207373153' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:24:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1207373153' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:24:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:21.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:23 compute-0 ceph-mon[73572]: pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:23.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:23 compute-0 nova_compute[262220]: 2025-10-08 10:24:23.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:24 compute-0 sudo[294307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:24:24 compute-0 sudo[294307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:24 compute-0 sudo[294307]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:25 compute-0 sudo[294332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 08 10:24:25 compute-0 sudo[294332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:25 compute-0 ceph-mon[73572]: pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 10:24:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 10:24:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:25 compute-0 sudo[294332]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:24:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:25 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:24:25 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:25 compute-0 sudo[294380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:24:25 compute-0 sudo[294380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:25 compute-0 sudo[294380]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:25 compute-0 sudo[294405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:24:25 compute-0 sudo[294405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:25] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:24:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:25] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:24:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:25 compute-0 nova_compute[262220]: 2025-10-08 10:24:25.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:26 compute-0 sudo[294405]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:24:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:24:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:24:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:24:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:24:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:24:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:24:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:24:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:24:26 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:24:26 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:24:26 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:24:26 compute-0 sudo[294463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:24:26 compute-0 sudo[294463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:26 compute-0 sudo[294463]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:26 compute-0 sudo[294488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:24:26 compute-0 sudo[294488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:24:26 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:24:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:26 compute-0 podman[294555]: 2025-10-08 10:24:26.6963711 +0000 UTC m=+0.036789933 container create 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 10:24:26 compute-0 systemd[1]: Started libpod-conmon-650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07.scope.
Oct 08 10:24:26 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:24:26 compute-0 podman[294555]: 2025-10-08 10:24:26.679583686 +0000 UTC m=+0.020002539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:24:26 compute-0 podman[294555]: 2025-10-08 10:24:26.779944657 +0000 UTC m=+0.120363510 container init 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:24:26 compute-0 podman[294555]: 2025-10-08 10:24:26.787508122 +0000 UTC m=+0.127926955 container start 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:24:26 compute-0 podman[294555]: 2025-10-08 10:24:26.791201022 +0000 UTC m=+0.131619875 container attach 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:24:26 compute-0 eloquent_gould[294572]: 167 167
Oct 08 10:24:26 compute-0 systemd[1]: libpod-650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07.scope: Deactivated successfully.
Oct 08 10:24:26 compute-0 podman[294555]: 2025-10-08 10:24:26.79300567 +0000 UTC m=+0.133424503 container died 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:24:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4ef0645135794cbe5475b7d69c8023008419860790f87a6d4ffae4d2051ea2b-merged.mount: Deactivated successfully.
Oct 08 10:24:26 compute-0 podman[294555]: 2025-10-08 10:24:26.831229239 +0000 UTC m=+0.171648072 container remove 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:24:26 compute-0 systemd[1]: libpod-conmon-650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07.scope: Deactivated successfully.
Oct 08 10:24:26 compute-0 podman[294596]: 2025-10-08 10:24:26.985695792 +0000 UTC m=+0.035433368 container create 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:24:27 compute-0 systemd[1]: Started libpod-conmon-95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6.scope.
Oct 08 10:24:27 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:24:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:27 compute-0 podman[294596]: 2025-10-08 10:24:27.062867293 +0000 UTC m=+0.112604879 container init 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:24:27 compute-0 podman[294596]: 2025-10-08 10:24:26.970696097 +0000 UTC m=+0.020433693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:24:27 compute-0 podman[294596]: 2025-10-08 10:24:27.06923954 +0000 UTC m=+0.118977116 container start 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:24:27 compute-0 podman[294596]: 2025-10-08 10:24:27.073125165 +0000 UTC m=+0.122862761 container attach 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:24:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:27.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:27.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:27 compute-0 ceph-mon[73572]: pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:27 compute-0 ceph-mon[73572]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:27 compute-0 strange_bhabha[294613]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:24:27 compute-0 strange_bhabha[294613]: --> All data devices are unavailable
Oct 08 10:24:27 compute-0 systemd[1]: libpod-95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6.scope: Deactivated successfully.
Oct 08 10:24:27 compute-0 podman[294596]: 2025-10-08 10:24:27.432596751 +0000 UTC m=+0.482334347 container died 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:24:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809-merged.mount: Deactivated successfully.
Oct 08 10:24:27 compute-0 podman[294596]: 2025-10-08 10:24:27.494963111 +0000 UTC m=+0.544700717 container remove 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 10:24:27 compute-0 systemd[1]: libpod-conmon-95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6.scope: Deactivated successfully.
Oct 08 10:24:27 compute-0 sudo[294488]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:27 compute-0 sudo[294641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:24:27 compute-0 sudo[294641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:27 compute-0 sudo[294641]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:27 compute-0 sudo[294666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:24:27 compute-0 sudo[294666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:28 compute-0 podman[294736]: 2025-10-08 10:24:28.139249533 +0000 UTC m=+0.040246274 container create 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:24:28 compute-0 systemd[1]: Started libpod-conmon-4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a.scope.
Oct 08 10:24:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:24:28 compute-0 podman[294736]: 2025-10-08 10:24:28.207741392 +0000 UTC m=+0.108738133 container init 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 08 10:24:28 compute-0 podman[294736]: 2025-10-08 10:24:28.124581089 +0000 UTC m=+0.025577860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:24:28 compute-0 podman[294736]: 2025-10-08 10:24:28.219760722 +0000 UTC m=+0.120757463 container start 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 08 10:24:28 compute-0 podman[294736]: 2025-10-08 10:24:28.223181452 +0000 UTC m=+0.124178223 container attach 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:24:28 compute-0 condescending_yalow[294754]: 167 167
Oct 08 10:24:28 compute-0 systemd[1]: libpod-4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a.scope: Deactivated successfully.
Oct 08 10:24:28 compute-0 conmon[294754]: conmon 4372fa772eab3456a76e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a.scope/container/memory.events
Oct 08 10:24:28 compute-0 podman[294736]: 2025-10-08 10:24:28.226969705 +0000 UTC m=+0.127966446 container died 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 08 10:24:28 compute-0 podman[294751]: 2025-10-08 10:24:28.257184254 +0000 UTC m=+0.087214137 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 08 10:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2923532f89b5309ca7222b7e5615e36ffdcc370172c21361928cd42226676bd-merged.mount: Deactivated successfully.
Oct 08 10:24:28 compute-0 podman[294736]: 2025-10-08 10:24:28.270265378 +0000 UTC m=+0.171262119 container remove 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 10:24:28 compute-0 systemd[1]: libpod-conmon-4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a.scope: Deactivated successfully.
Oct 08 10:24:28 compute-0 podman[294799]: 2025-10-08 10:24:28.440451211 +0000 UTC m=+0.045063310 container create 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 08 10:24:28 compute-0 systemd[1]: Started libpod-conmon-65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51.scope.
Oct 08 10:24:28 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:28.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:28 compute-0 podman[294799]: 2025-10-08 10:24:28.515455671 +0000 UTC m=+0.120067790 container init 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 08 10:24:28 compute-0 podman[294799]: 2025-10-08 10:24:28.424703691 +0000 UTC m=+0.029315820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:24:28 compute-0 podman[294799]: 2025-10-08 10:24:28.523248383 +0000 UTC m=+0.127860482 container start 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 10:24:28 compute-0 podman[294799]: 2025-10-08 10:24:28.527382348 +0000 UTC m=+0.131994457 container attach 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 08 10:24:28 compute-0 laughing_borg[294815]: {
Oct 08 10:24:28 compute-0 laughing_borg[294815]:     "1": [
Oct 08 10:24:28 compute-0 laughing_borg[294815]:         {
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "devices": [
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "/dev/loop3"
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             ],
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "lv_name": "ceph_lv0",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "lv_size": "21470642176",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "name": "ceph_lv0",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "tags": {
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.cluster_name": "ceph",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.crush_device_class": "",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.encrypted": "0",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.osd_id": "1",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.type": "block",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.vdo": "0",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:                 "ceph.with_tpm": "0"
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             },
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "type": "block",
Oct 08 10:24:28 compute-0 laughing_borg[294815]:             "vg_name": "ceph_vg0"
Oct 08 10:24:28 compute-0 laughing_borg[294815]:         }
Oct 08 10:24:28 compute-0 laughing_borg[294815]:     ]
Oct 08 10:24:28 compute-0 laughing_borg[294815]: }
Oct 08 10:24:28 compute-0 nova_compute[262220]: 2025-10-08 10:24:28.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:28 compute-0 systemd[1]: libpod-65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51.scope: Deactivated successfully.
Oct 08 10:24:28 compute-0 podman[294799]: 2025-10-08 10:24:28.784151826 +0000 UTC m=+0.388763935 container died 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 08 10:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488-merged.mount: Deactivated successfully.
Oct 08 10:24:28 compute-0 podman[294799]: 2025-10-08 10:24:28.824215544 +0000 UTC m=+0.428827643 container remove 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 08 10:24:28 compute-0 systemd[1]: libpod-conmon-65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51.scope: Deactivated successfully.
Oct 08 10:24:28 compute-0 sudo[294666]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:28.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:24:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:28.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:24:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:28.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:24:28 compute-0 sudo[294835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:24:28 compute-0 sudo[294835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:28 compute-0 sudo[294835]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:28 compute-0 sudo[294860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:24:28 compute-0 sudo[294860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:29.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:29 compute-0 podman[294929]: 2025-10-08 10:24:29.393005211 +0000 UTC m=+0.045102512 container create a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 08 10:24:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:29 compute-0 ceph-mon[73572]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:29 compute-0 systemd[1]: Started libpod-conmon-a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca.scope.
Oct 08 10:24:29 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:24:29 compute-0 podman[294929]: 2025-10-08 10:24:29.373825079 +0000 UTC m=+0.025922420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:24:29 compute-0 podman[294929]: 2025-10-08 10:24:29.469659544 +0000 UTC m=+0.121756885 container init a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 10:24:29 compute-0 podman[294929]: 2025-10-08 10:24:29.477858489 +0000 UTC m=+0.129955780 container start a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:24:29 compute-0 podman[294929]: 2025-10-08 10:24:29.481667913 +0000 UTC m=+0.133765234 container attach a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:24:29 compute-0 friendly_villani[294946]: 167 167
Oct 08 10:24:29 compute-0 systemd[1]: libpod-a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca.scope: Deactivated successfully.
Oct 08 10:24:29 compute-0 podman[294929]: 2025-10-08 10:24:29.484892597 +0000 UTC m=+0.136989908 container died a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 08 10:24:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b32bf2d97f46bfbb46900474593573c9bfcdda953a2f063bc78a5efc6f6fe4f8-merged.mount: Deactivated successfully.
Oct 08 10:24:29 compute-0 podman[294929]: 2025-10-08 10:24:29.522228256 +0000 UTC m=+0.174325547 container remove a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:24:29 compute-0 systemd[1]: libpod-conmon-a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca.scope: Deactivated successfully.
Oct 08 10:24:29 compute-0 podman[294970]: 2025-10-08 10:24:29.707202879 +0000 UTC m=+0.041244357 container create 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 10:24:29 compute-0 systemd[1]: Started libpod-conmon-2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b.scope.
Oct 08 10:24:29 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:29 compute-0 podman[294970]: 2025-10-08 10:24:29.688879896 +0000 UTC m=+0.022921384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:24:29 compute-0 podman[294970]: 2025-10-08 10:24:29.798463686 +0000 UTC m=+0.132505144 container init 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 08 10:24:29 compute-0 podman[294970]: 2025-10-08 10:24:29.804178681 +0000 UTC m=+0.138220139 container start 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 10:24:29 compute-0 podman[294970]: 2025-10-08 10:24:29.807121076 +0000 UTC m=+0.141162554 container attach 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 08 10:24:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:30 compute-0 lvm[295061]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:24:30 compute-0 lvm[295061]: VG ceph_vg0 finished
Oct 08 10:24:30 compute-0 admiring_jackson[294986]: {}
Oct 08 10:24:30 compute-0 systemd[1]: libpod-2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b.scope: Deactivated successfully.
Oct 08 10:24:30 compute-0 systemd[1]: libpod-2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b.scope: Consumed 1.066s CPU time.
Oct 08 10:24:30 compute-0 podman[294970]: 2025-10-08 10:24:30.485495881 +0000 UTC m=+0.819537339 container died 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:24:30 compute-0 sudo[295065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:24:30 compute-0 sudo[295065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:30 compute-0 sudo[295065]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:30.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95-merged.mount: Deactivated successfully.
Oct 08 10:24:30 compute-0 podman[294970]: 2025-10-08 10:24:30.535554042 +0000 UTC m=+0.869595500 container remove 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:24:30 compute-0 systemd[1]: libpod-conmon-2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b.scope: Deactivated successfully.
Oct 08 10:24:30 compute-0 sudo[294860]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:24:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:30 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:24:30 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:30 compute-0 sudo[295104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:24:30 compute-0 sudo[295104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:30 compute-0 sudo[295104]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:30 compute-0 nova_compute[262220]: 2025-10-08 10:24:30.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:31.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:31 compute-0 ceph-mon[73572]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:31 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:31 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:24:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:24:31 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3376 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1713 writes, 5631 keys, 1713 commit groups, 1.0 writes per commit group, ingest: 6.95 MB, 0.01 MB/s
                                           Interval WAL: 1713 writes, 710 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 08 10:24:31 compute-0 podman[295131]: 2025-10-08 10:24:31.930902267 +0000 UTC m=+0.076739167 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 10:24:31 compute-0 podman[295130]: 2025-10-08 10:24:31.943916058 +0000 UTC m=+0.089830461 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:24:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:24:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:24:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:33.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:33 compute-0 ceph-mon[73572]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:24:33 compute-0 nova_compute[262220]: 2025-10-08 10:24:33.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.419331) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074419376, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2632, "num_deletes": 505, "total_data_size": 4220989, "memory_usage": 4324656, "flush_reason": "Manual Compaction"}
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074440673, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 4086803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31664, "largest_seqno": 34295, "table_properties": {"data_size": 4074665, "index_size": 7160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 31602, "raw_average_key_size": 20, "raw_value_size": 4046967, "raw_average_value_size": 2627, "num_data_blocks": 306, "num_entries": 1540, "num_filter_entries": 1540, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918885, "oldest_key_time": 1759918885, "file_creation_time": 1759919074, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 21379 microseconds, and 7085 cpu microseconds.
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.440710) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 4086803 bytes OK
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.440727) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.442249) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.442260) EVENT_LOG_v1 {"time_micros": 1759919074442257, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.442281) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4208252, prev total WAL file size 4208252, number of live WAL files 2.
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.443169) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323630' seq:72057594037927935, type:22 .. '6B7600353131' seq:0, type:0; will stop at (end)
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3991KB)], [68(13MB)]
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074443207, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 18078954, "oldest_snapshot_seqno": -1}
Oct 08 10:24:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6722 keys, 16562887 bytes, temperature: kUnknown
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074529496, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 16562887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16515952, "index_size": 29031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 174881, "raw_average_key_size": 26, "raw_value_size": 16393117, "raw_average_value_size": 2438, "num_data_blocks": 1157, "num_entries": 6722, "num_filter_entries": 6722, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919074, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.529699) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16562887 bytes
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.530703) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.4 rd, 191.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 13.3 +0.0 blob) out(15.8 +0.0 blob), read-write-amplify(8.5) write-amplify(4.1) OK, records in: 7749, records dropped: 1027 output_compression: NoCompression
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.530719) EVENT_LOG_v1 {"time_micros": 1759919074530712, "job": 38, "event": "compaction_finished", "compaction_time_micros": 86346, "compaction_time_cpu_micros": 43046, "output_level": 6, "num_output_files": 1, "total_output_size": 16562887, "num_input_records": 7749, "num_output_records": 6722, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074531426, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074533791, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.443098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:34 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:35.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:35 compute-0 ceph-mon[73572]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:35] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 08 10:24:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:35] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 08 10:24:35 compute-0 nova_compute[262220]: 2025-10-08 10:24:35.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:37.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:37.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:37 compute-0 ceph-mon[73572]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:24:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:38.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:38 compute-0 nova_compute[262220]: 2025-10-08 10:24:38.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:38.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:24:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:38.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:39.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:39 compute-0 ceph-mon[73572]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:40.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:40 compute-0 nova_compute[262220]: 2025-10-08 10:24:40.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:41.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:41 compute-0 ceph-mon[73572]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:42 compute-0 ceph-mon[73572]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:43.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:43 compute-0 nova_compute[262220]: 2025-10-08 10:24:43.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.256326) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084256369, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 335, "num_deletes": 251, "total_data_size": 218124, "memory_usage": 225560, "flush_reason": "Manual Compaction"}
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084259455, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 215844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34296, "largest_seqno": 34630, "table_properties": {"data_size": 213690, "index_size": 318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5302, "raw_average_key_size": 18, "raw_value_size": 209540, "raw_average_value_size": 730, "num_data_blocks": 14, "num_entries": 287, "num_filter_entries": 287, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759919074, "oldest_key_time": 1759919074, "file_creation_time": 1759919084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 3162 microseconds, and 1031 cpu microseconds.
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.259493) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 215844 bytes OK
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.259510) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261541) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261557) EVENT_LOG_v1 {"time_micros": 1759919084261552, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261574) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 215840, prev total WAL file size 215840, number of live WAL files 2.
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(210KB)], [71(15MB)]
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084261956, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 16778731, "oldest_snapshot_seqno": -1}
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6499 keys, 14677691 bytes, temperature: kUnknown
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084330581, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 14677691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14633718, "index_size": 26647, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 170929, "raw_average_key_size": 26, "raw_value_size": 14516052, "raw_average_value_size": 2233, "num_data_blocks": 1051, "num_entries": 6499, "num_filter_entries": 6499, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.330818) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14677691 bytes
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.331872) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.2 rd, 213.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 15.8 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(145.7) write-amplify(68.0) OK, records in: 7009, records dropped: 510 output_compression: NoCompression
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.331887) EVENT_LOG_v1 {"time_micros": 1759919084331880, "job": 40, "event": "compaction_finished", "compaction_time_micros": 68697, "compaction_time_cpu_micros": 29894, "output_level": 6, "num_output_files": 1, "total_output_size": 14677691, "num_input_records": 7009, "num_output_records": 6499, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084332017, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084334965, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:44 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:24:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:44.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:45 compute-0 ceph-mon[73572]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:45.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:45] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 08 10:24:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:45] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 08 10:24:45 compute-0 nova_compute[262220]: 2025-10-08 10:24:45.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:46.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:47.229Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:47 compute-0 ceph-mon[73572]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:47.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:24:47
Oct 08 10:24:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:24:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:24:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['.nfs', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 08 10:24:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:24:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:24:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:24:47 compute-0 nova_compute[262220]: 2025-10-08 10:24:47.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:47 compute-0 nova_compute[262220]: 2025-10-08 10:24:47.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 08 10:24:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:24:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:24:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:24:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:24:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:48.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:48 compute-0 nova_compute[262220]: 2025-10-08 10:24:48.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:48.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:48 compute-0 podman[295185]: 2025-10-08 10:24:48.925386449 +0000 UTC m=+0.079531418 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 10:24:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:49.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:49 compute-0 ceph-mon[73572]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:49 compute-0 nova_compute[262220]: 2025-10-08 10:24:49.903 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:50.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:50 compute-0 sudo[295209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:24:50 compute-0 sudo[295209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:24:50 compute-0 sudo[295209]: pam_unix(sudo:session): session closed for user root
Oct 08 10:24:50 compute-0 nova_compute[262220]: 2025-10-08 10:24:50.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:51 compute-0 nova_compute[262220]: 2025-10-08 10:24:50.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:51 compute-0 ceph-mon[73572]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:51 compute-0 nova_compute[262220]: 2025-10-08 10:24:51.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:24:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:52.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:24:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:53.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:53 compute-0 ceph-mon[73572]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:53 compute-0 nova_compute[262220]: 2025-10-08 10:24:53.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:53 compute-0 nova_compute[262220]: 2025-10-08 10:24:53.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:53 compute-0 nova_compute[262220]: 2025-10-08 10:24:53.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:24:53 compute-0 nova_compute[262220]: 2025-10-08 10:24:53.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:24:53 compute-0 nova_compute[262220]: 2025-10-08 10:24:53.901 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:24:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:54.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:54 compute-0 nova_compute[262220]: 2025-10-08 10:24:54.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:54 compute-0 nova_compute[262220]: 2025-10-08 10:24:54.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:54 compute-0 nova_compute[262220]: 2025-10-08 10:24:54.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:54 compute-0 nova_compute[262220]: 2025-10-08 10:24:54.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:24:54 compute-0 nova_compute[262220]: 2025-10-08 10:24:54.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:24:54 compute-0 nova_compute[262220]: 2025-10-08 10:24:54.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:24:54 compute-0 nova_compute[262220]: 2025-10-08 10:24:54.933 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:24:54 compute-0 nova_compute[262220]: 2025-10-08 10:24:54.933 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:24:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:55.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:24:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1978300500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:24:55 compute-0 nova_compute[262220]: 2025-10-08 10:24:55.422 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:24:55 compute-0 ceph-mon[73572]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:24:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1978300500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:24:55 compute-0 nova_compute[262220]: 2025-10-08 10:24:55.588 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:24:55 compute-0 nova_compute[262220]: 2025-10-08 10:24:55.590 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4503MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:24:55 compute-0 nova_compute[262220]: 2025-10-08 10:24:55.590 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:24:55 compute-0 nova_compute[262220]: 2025-10-08 10:24:55.590 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:24:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:24:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:24:55 compute-0 nova_compute[262220]: 2025-10-08 10:24:55.873 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:24:55 compute-0 nova_compute[262220]: 2025-10-08 10:24:55.874 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.046 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 08 10:24:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.167 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.167 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.201 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.231 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.258 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:24:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1290111322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:24:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1702648968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:24:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:56.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:24:56 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323080184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.701 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.707 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.738 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.740 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:24:56 compute-0 nova_compute[262220]: 2025-10-08 10:24:56.740 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:24:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:57.230Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:24:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:57.230Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:24:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:57.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:24:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:57.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:24:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:24:57.423 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:24:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:24:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:24:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:24:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:24:57 compute-0 ceph-mon[73572]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/323080184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:24:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:58 compute-0 nova_compute[262220]: 2025-10-08 10:24:58.741 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:58 compute-0 nova_compute[262220]: 2025-10-08 10:24:58.741 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:24:58 compute-0 nova_compute[262220]: 2025-10-08 10:24:58.741 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:24:58 compute-0 nova_compute[262220]: 2025-10-08 10:24:58.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:24:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:58.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:24:58 compute-0 podman[295286]: 2025-10-08 10:24:58.906858259 +0000 UTC m=+0.073791941 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 10:24:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:24:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:24:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:24:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:24:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:24:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:24:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:59.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:24:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:24:59 compute-0 ceph-mon[73572]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:24:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3358993482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:25:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:00.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1892105241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:25:01 compute-0 nova_compute[262220]: 2025-10-08 10:25:01.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:01.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:01 compute-0 ceph-mon[73572]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:02.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:25:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:25:02 compute-0 nova_compute[262220]: 2025-10-08 10:25:02.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:02 compute-0 podman[295317]: 2025-10-08 10:25:02.901788261 +0000 UTC m=+0.056675977 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:25:02 compute-0 podman[295316]: 2025-10-08 10:25:02.927915667 +0000 UTC m=+0.086356378 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd)
Oct 08 10:25:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:03.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:03 compute-0 ceph-mon[73572]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:25:03 compute-0 nova_compute[262220]: 2025-10-08 10:25:03.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:03 compute-0 nova_compute[262220]: 2025-10-08 10:25:03.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:03 compute-0 nova_compute[262220]: 2025-10-08 10:25:03.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 08 10:25:03 compute-0 nova_compute[262220]: 2025-10-08 10:25:03.911 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 08 10:25:03 compute-0 nova_compute[262220]: 2025-10-08 10:25:03.911 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:04 compute-0 unix_chkpwd[295353]: password check failed for user (root)
Oct 08 10:25:04 compute-0 sshd-session[295350]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:04.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:05.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:05 compute-0 ceph-mon[73572]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:25:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:25:05 compute-0 sshd-session[295350]: Failed password for root from 196.203.106.113 port 57192 ssh2
Oct 08 10:25:06 compute-0 nova_compute[262220]: 2025-10-08 10:25:06.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:06.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:06 compute-0 nova_compute[262220]: 2025-10-08 10:25:06.963 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:07 compute-0 sshd-session[295350]: Connection closed by authenticating user root 196.203.106.113 port 57192 [preauth]
Oct 08 10:25:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:07.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:25:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:07.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:07 compute-0 ceph-mon[73572]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:07 compute-0 unix_chkpwd[295359]: password check failed for user (root)
Oct 08 10:25:07 compute-0 sshd-session[295357]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:08.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:08 compute-0 ceph-mon[73572]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:08 compute-0 nova_compute[262220]: 2025-10-08 10:25:08.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:08.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:25:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:08.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:25:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:09.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:09 compute-0 sshd-session[295357]: Failed password for root from 196.203.106.113 port 53894 ssh2
Oct 08 10:25:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:10.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:10 compute-0 sudo[295363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:25:10 compute-0 sudo[295363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:10 compute-0 sudo[295363]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:10 compute-0 sshd-session[295357]: Connection closed by authenticating user root 196.203.106.113 port 53894 [preauth]
Oct 08 10:25:11 compute-0 nova_compute[262220]: 2025-10-08 10:25:11.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:11 compute-0 ceph-mon[73572]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:11.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:11 compute-0 unix_chkpwd[295391]: password check failed for user (root)
Oct 08 10:25:11 compute-0 sshd-session[295388]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:12.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:13 compute-0 ceph-mon[73572]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:13.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:13 compute-0 sshd-session[295388]: Failed password for root from 196.203.106.113 port 53906 ssh2
Oct 08 10:25:13 compute-0 nova_compute[262220]: 2025-10-08 10:25:13.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:14.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:14 compute-0 sshd-session[295388]: Connection closed by authenticating user root 196.203.106.113 port 53906 [preauth]
Oct 08 10:25:15 compute-0 ceph-mon[73572]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:15 compute-0 unix_chkpwd[295398]: password check failed for user (root)
Oct 08 10:25:15 compute-0 sshd-session[295395]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:15.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:25:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:25:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:16 compute-0 nova_compute[262220]: 2025-10-08 10:25:16.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:17.234Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:25:17 compute-0 ceph-mon[73572]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:25:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:17.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:25:17 compute-0 sshd-session[295395]: Failed password for root from 196.203.106.113 port 45838 ssh2
Oct 08 10:25:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:25:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:25:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:25:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:25:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:25:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:25:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:25:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:25:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:25:18 compute-0 sshd-session[295395]: Connection closed by authenticating user root 196.203.106.113 port 45838 [preauth]
Oct 08 10:25:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:18.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:18 compute-0 nova_compute[262220]: 2025-10-08 10:25:18.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:18.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:25:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:18.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:25:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:18.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:25:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:19 compute-0 unix_chkpwd[295404]: password check failed for user (root)
Oct 08 10:25:19 compute-0 sshd-session[295402]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:19 compute-0 ceph-mon[73572]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:19.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:19 compute-0 podman[295406]: 2025-10-08 10:25:19.899766139 +0000 UTC m=+0.058899379 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:25:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:20.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:20 compute-0 sshd-session[295402]: Failed password for root from 196.203.106.113 port 45846 ssh2
Oct 08 10:25:21 compute-0 nova_compute[262220]: 2025-10-08 10:25:21.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:21.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:21 compute-0 ceph-mon[73572]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2833912345' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:25:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/2833912345' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:25:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:22 compute-0 sshd-session[295402]: Connection closed by authenticating user root 196.203.106.113 port 45846 [preauth]
Oct 08 10:25:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:22 compute-0 unix_chkpwd[295432]: password check failed for user (root)
Oct 08 10:25:22 compute-0 sshd-session[295430]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:23.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:23 compute-0 ceph-mon[73572]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:23 compute-0 nova_compute[262220]: 2025-10-08 10:25:23.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:24.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:24 compute-0 sshd-session[295430]: Failed password for root from 196.203.106.113 port 45858 ssh2
Oct 08 10:25:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:25 compute-0 ceph-mon[73572]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:25] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:25:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:25] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:25:25 compute-0 sshd-session[295430]: Connection closed by authenticating user root 196.203.106.113 port 45858 [preauth]
Oct 08 10:25:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:26 compute-0 nova_compute[262220]: 2025-10-08 10:25:26.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:26 compute-0 unix_chkpwd[295439]: password check failed for user (root)
Oct 08 10:25:26 compute-0 sshd-session[295436]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:26.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:27.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:25:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:27.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:27 compute-0 ceph-mon[73572]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:28.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:28 compute-0 sshd-session[295436]: Failed password for root from 196.203.106.113 port 38504 ssh2
Oct 08 10:25:28 compute-0 nova_compute[262220]: 2025-10-08 10:25:28.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:28.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:25:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:28.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:25:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:29.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:29 compute-0 ceph-mon[73572]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:29 compute-0 sshd-session[295436]: Connection closed by authenticating user root 196.203.106.113 port 38504 [preauth]
Oct 08 10:25:29 compute-0 podman[295445]: 2025-10-08 10:25:29.977738977 +0000 UTC m=+0.123926366 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:25:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:30 compute-0 unix_chkpwd[295473]: password check failed for user (root)
Oct 08 10:25:30 compute-0 sshd-session[295443]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:30.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:30 compute-0 sudo[295474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:25:30 compute-0 sudo[295474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:30 compute-0 sudo[295474]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:30 compute-0 sudo[295499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:25:30 compute-0 sudo[295499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:30 compute-0 sudo[295499]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:30 compute-0 sudo[295524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:25:30 compute-0 sudo[295524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:31 compute-0 nova_compute[262220]: 2025-10-08 10:25:31.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 10:25:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 10:25:31 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:31.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:31 compute-0 sudo[295524]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:31 compute-0 ceph-mon[73572]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:31 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:31 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:25:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:25:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:25:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:25:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:25:32 compute-0 sudo[295582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:25:32 compute-0 sshd-session[295443]: Failed password for root from 196.203.106.113 port 38514 ssh2
Oct 08 10:25:32 compute-0 sudo[295582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:32 compute-0 sudo[295582]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:32 compute-0 sudo[295607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:25:32 compute-0 sudo[295607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:25:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:25:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:25:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:25:32 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:25:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:32.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:32 compute-0 podman[295672]: 2025-10-08 10:25:32.729987229 +0000 UTC m=+0.037428544 container create a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 08 10:25:32 compute-0 systemd[1]: Started libpod-conmon-a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b.scope.
Oct 08 10:25:32 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:25:32 compute-0 podman[295672]: 2025-10-08 10:25:32.712619316 +0000 UTC m=+0.020060641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:25:32 compute-0 podman[295672]: 2025-10-08 10:25:32.812341476 +0000 UTC m=+0.119782801 container init a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 08 10:25:32 compute-0 podman[295672]: 2025-10-08 10:25:32.819080685 +0000 UTC m=+0.126521990 container start a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:25:32 compute-0 podman[295672]: 2025-10-08 10:25:32.822935821 +0000 UTC m=+0.130377176 container attach a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 08 10:25:32 compute-0 gracious_bhaskara[295688]: 167 167
Oct 08 10:25:32 compute-0 systemd[1]: libpod-a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b.scope: Deactivated successfully.
Oct 08 10:25:32 compute-0 conmon[295688]: conmon a40dc6d11bab4810312a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b.scope/container/memory.events
Oct 08 10:25:32 compute-0 podman[295693]: 2025-10-08 10:25:32.870232423 +0000 UTC m=+0.027895765 container died a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:25:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb73375c8d9b7459a967bd5dba04125ac1726fb3a93ecafd8edea0ac1be1fd87-merged.mount: Deactivated successfully.
Oct 08 10:25:32 compute-0 podman[295693]: 2025-10-08 10:25:32.927224148 +0000 UTC m=+0.084887450 container remove a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:25:32 compute-0 systemd[1]: libpod-conmon-a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b.scope: Deactivated successfully.
Oct 08 10:25:33 compute-0 podman[295708]: 2025-10-08 10:25:33.035821186 +0000 UTC m=+0.068918053 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:25:33 compute-0 podman[295711]: 2025-10-08 10:25:33.057584072 +0000 UTC m=+0.079071173 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 08 10:25:33 compute-0 podman[295753]: 2025-10-08 10:25:33.105830074 +0000 UTC m=+0.039609283 container create 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:25:33 compute-0 systemd[1]: Started libpod-conmon-2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4.scope.
Oct 08 10:25:33 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:33 compute-0 podman[295753]: 2025-10-08 10:25:33.089207966 +0000 UTC m=+0.022987185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:25:33 compute-0 podman[295753]: 2025-10-08 10:25:33.216579703 +0000 UTC m=+0.150358942 container init 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 10:25:33 compute-0 podman[295753]: 2025-10-08 10:25:33.225372338 +0000 UTC m=+0.159151527 container start 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 10:25:33 compute-0 podman[295753]: 2025-10-08 10:25:33.228798659 +0000 UTC m=+0.162577888 container attach 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:25:33 compute-0 sshd-session[295443]: Connection closed by authenticating user root 196.203.106.113 port 38514 [preauth]
Oct 08 10:25:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:33.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:33 compute-0 quizzical_cori[295769]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:25:33 compute-0 quizzical_cori[295769]: --> All data devices are unavailable
Oct 08 10:25:33 compute-0 ceph-mon[73572]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:33 compute-0 ceph-mon[73572]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:25:33 compute-0 systemd[1]: libpod-2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4.scope: Deactivated successfully.
Oct 08 10:25:33 compute-0 podman[295753]: 2025-10-08 10:25:33.576650357 +0000 UTC m=+0.510429546 container died 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 08 10:25:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82-merged.mount: Deactivated successfully.
Oct 08 10:25:33 compute-0 podman[295753]: 2025-10-08 10:25:33.620161567 +0000 UTC m=+0.553940756 container remove 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:25:33 compute-0 systemd[1]: libpod-conmon-2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4.scope: Deactivated successfully.
Oct 08 10:25:33 compute-0 sudo[295607]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:33 compute-0 sudo[295800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:25:33 compute-0 sudo[295800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:33 compute-0 sudo[295800]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:33 compute-0 sudo[295825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:25:33 compute-0 nova_compute[262220]: 2025-10-08 10:25:33.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:33 compute-0 sudo[295825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:34 compute-0 unix_chkpwd[295886]: password check failed for user (root)
Oct 08 10:25:34 compute-0 sshd-session[295781]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:34 compute-0 podman[295891]: 2025-10-08 10:25:34.16875543 +0000 UTC m=+0.039207262 container create b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:25:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:34 compute-0 systemd[1]: Started libpod-conmon-b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a.scope.
Oct 08 10:25:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:25:34 compute-0 podman[295891]: 2025-10-08 10:25:34.152782722 +0000 UTC m=+0.023234554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:25:34 compute-0 podman[295891]: 2025-10-08 10:25:34.255923274 +0000 UTC m=+0.126375096 container init b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 08 10:25:34 compute-0 podman[295891]: 2025-10-08 10:25:34.262278479 +0000 UTC m=+0.132730291 container start b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 10:25:34 compute-0 podman[295891]: 2025-10-08 10:25:34.265382609 +0000 UTC m=+0.135834431 container attach b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:25:34 compute-0 beautiful_aryabhata[295907]: 167 167
Oct 08 10:25:34 compute-0 systemd[1]: libpod-b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a.scope: Deactivated successfully.
Oct 08 10:25:34 compute-0 conmon[295907]: conmon b47b901dda42e314b931 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a.scope/container/memory.events
Oct 08 10:25:34 compute-0 podman[295891]: 2025-10-08 10:25:34.269879036 +0000 UTC m=+0.140330868 container died b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 10:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ab98804145baa969a471bdd1396992643a996f3893790f8d4aa6e03dcd9bc65-merged.mount: Deactivated successfully.
Oct 08 10:25:34 compute-0 podman[295891]: 2025-10-08 10:25:34.304246488 +0000 UTC m=+0.174698310 container remove b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 08 10:25:34 compute-0 systemd[1]: libpod-conmon-b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a.scope: Deactivated successfully.
Oct 08 10:25:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:34 compute-0 podman[295933]: 2025-10-08 10:25:34.459653424 +0000 UTC m=+0.041100133 container create d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:25:34 compute-0 systemd[1]: Started libpod-conmon-d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698.scope.
Oct 08 10:25:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:34 compute-0 podman[295933]: 2025-10-08 10:25:34.441188045 +0000 UTC m=+0.022634774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:25:34 compute-0 podman[295933]: 2025-10-08 10:25:34.540467131 +0000 UTC m=+0.121913860 container init d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:25:34 compute-0 podman[295933]: 2025-10-08 10:25:34.547045085 +0000 UTC m=+0.128491794 container start d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 08 10:25:34 compute-0 podman[295933]: 2025-10-08 10:25:34.549957219 +0000 UTC m=+0.131403958 container attach d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 10:25:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:34.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:34 compute-0 amazing_gauss[295949]: {
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:     "1": [
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:         {
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "devices": [
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "/dev/loop3"
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             ],
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "lv_name": "ceph_lv0",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "lv_size": "21470642176",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "name": "ceph_lv0",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "tags": {
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.cluster_name": "ceph",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.crush_device_class": "",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.encrypted": "0",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.osd_id": "1",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.type": "block",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.vdo": "0",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:                 "ceph.with_tpm": "0"
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             },
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "type": "block",
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:             "vg_name": "ceph_vg0"
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:         }
Oct 08 10:25:34 compute-0 amazing_gauss[295949]:     ]
Oct 08 10:25:34 compute-0 amazing_gauss[295949]: }
Oct 08 10:25:34 compute-0 systemd[1]: libpod-d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698.scope: Deactivated successfully.
Oct 08 10:25:34 compute-0 podman[295933]: 2025-10-08 10:25:34.831086126 +0000 UTC m=+0.412532855 container died d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16-merged.mount: Deactivated successfully.
Oct 08 10:25:34 compute-0 podman[295933]: 2025-10-08 10:25:34.872855939 +0000 UTC m=+0.454302658 container remove d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 10:25:34 compute-0 systemd[1]: libpod-conmon-d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698.scope: Deactivated successfully.
Oct 08 10:25:34 compute-0 sudo[295825]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:34 compute-0 sudo[295968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:25:34 compute-0 sudo[295968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:34 compute-0 sudo[295968]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:35 compute-0 sudo[295993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:25:35 compute-0 sudo[295993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:35.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:35 compute-0 podman[296059]: 2025-10-08 10:25:35.44702209 +0000 UTC m=+0.048146591 container create 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:25:35 compute-0 systemd[1]: Started libpod-conmon-74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99.scope.
Oct 08 10:25:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:25:35 compute-0 podman[296059]: 2025-10-08 10:25:35.429142962 +0000 UTC m=+0.030267513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:25:35 compute-0 podman[296059]: 2025-10-08 10:25:35.540006483 +0000 UTC m=+0.141131054 container init 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 08 10:25:35 compute-0 podman[296059]: 2025-10-08 10:25:35.549888322 +0000 UTC m=+0.151012823 container start 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:25:35 compute-0 podman[296059]: 2025-10-08 10:25:35.553688096 +0000 UTC m=+0.154812697 container attach 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 08 10:25:35 compute-0 zealous_faraday[296076]: 167 167
Oct 08 10:25:35 compute-0 systemd[1]: libpod-74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99.scope: Deactivated successfully.
Oct 08 10:25:35 compute-0 podman[296059]: 2025-10-08 10:25:35.556500027 +0000 UTC m=+0.157624568 container died 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-344aa05e1b860a940e3e690741e3079f610ce6e7b3c047f3ba2bd48bd345222b-merged.mount: Deactivated successfully.
Oct 08 10:25:35 compute-0 ceph-mon[73572]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:35 compute-0 podman[296059]: 2025-10-08 10:25:35.603798469 +0000 UTC m=+0.204922970 container remove 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 10:25:35 compute-0 systemd[1]: libpod-conmon-74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99.scope: Deactivated successfully.
Oct 08 10:25:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:25:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:25:35 compute-0 podman[296100]: 2025-10-08 10:25:35.762945875 +0000 UTC m=+0.041412823 container create 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 10:25:35 compute-0 systemd[1]: Started libpod-conmon-14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14.scope.
Oct 08 10:25:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:25:35 compute-0 podman[296100]: 2025-10-08 10:25:35.834851675 +0000 UTC m=+0.113318623 container init 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:25:35 compute-0 podman[296100]: 2025-10-08 10:25:35.747565857 +0000 UTC m=+0.026032795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:25:35 compute-0 podman[296100]: 2025-10-08 10:25:35.851132062 +0000 UTC m=+0.129599000 container start 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:25:35 compute-0 podman[296100]: 2025-10-08 10:25:35.856159595 +0000 UTC m=+0.134626573 container attach 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 10:25:35 compute-0 sshd-session[295781]: Failed password for root from 196.203.106.113 port 38518 ssh2
Oct 08 10:25:36 compute-0 nova_compute[262220]: 2025-10-08 10:25:36.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:36 compute-0 lvm[296192]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:25:36 compute-0 lvm[296192]: VG ceph_vg0 finished
Oct 08 10:25:36 compute-0 gallant_joliot[296117]: {}
Oct 08 10:25:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:36.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:36 compute-0 systemd[1]: libpod-14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14.scope: Deactivated successfully.
Oct 08 10:25:36 compute-0 systemd[1]: libpod-14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14.scope: Consumed 1.228s CPU time.
Oct 08 10:25:36 compute-0 podman[296100]: 2025-10-08 10:25:36.642453297 +0000 UTC m=+0.920920245 container died 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255-merged.mount: Deactivated successfully.
Oct 08 10:25:36 compute-0 podman[296100]: 2025-10-08 10:25:36.682796575 +0000 UTC m=+0.961263503 container remove 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:25:36 compute-0 systemd[1]: libpod-conmon-14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14.scope: Deactivated successfully.
Oct 08 10:25:36 compute-0 sudo[295993]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:25:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:25:36 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:36 compute-0 sudo[296207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:25:36 compute-0 sudo[296207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:36 compute-0 sudo[296207]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:37 compute-0 sshd-session[295781]: Connection closed by authenticating user root 196.203.106.113 port 38518 [preauth]
Oct 08 10:25:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:37.236Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:25:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:37.236Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:25:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:37.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:25:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:37.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:37 compute-0 ceph-mon[73572]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:37 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:37 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:25:37 compute-0 unix_chkpwd[296235]: password check failed for user (root)
Oct 08 10:25:37 compute-0 sshd-session[296233]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:38 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:38.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:38 compute-0 nova_compute[262220]: 2025-10-08 10:25:38.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:38.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:25:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:39 compute-0 ceph-mon[73572]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:40 compute-0 sshd-session[296233]: Failed password for root from 196.203.106.113 port 59452 ssh2
Oct 08 10:25:40 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:25:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:40.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:25:40 compute-0 sshd-session[296233]: Connection closed by authenticating user root 196.203.106.113 port 59452 [preauth]
Oct 08 10:25:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59460 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:41 compute-0 nova_compute[262220]: 2025-10-08 10:25:41.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59466 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59482 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:41 compute-0 ceph-mon[73572]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59488 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59502 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:42 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59510 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:42 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:42 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59516 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:42.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:42 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59520 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:42 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59532 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:43 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59538 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:43 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59550 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:43.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:43 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59558 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:43 compute-0 ceph-mon[73572]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:25:43 compute-0 nova_compute[262220]: 2025-10-08 10:25:43.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:43 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59568 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:44 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59570 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:44 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:44 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:59576 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:44 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40776 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:44.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:44 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40780 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40788 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40796 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:45.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40804 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:45 compute-0 ceph-mon[73572]: pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:25:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:25:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40806 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40818 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:46 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:46 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40826 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:46 compute-0 nova_compute[262220]: 2025-10-08 10:25:46.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:46 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40830 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:46.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:46 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40838 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:46 compute-0 ceph-mon[73572]: pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:46 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40842 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:47 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40854 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:47.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:25:47 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40866 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:47.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:47 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40870 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:25:47
Oct 08 10:25:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:25:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:25:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'images', 'backups', 'default.rgw.meta', '.mgr', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.nfs']
Oct 08 10:25:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:25:47 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40880 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:25:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:25:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:25:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:25:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:25:48 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40884 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:25:48 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40900 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:25:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:25:48 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40916 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:48.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:48 compute-0 nova_compute[262220]: 2025-10-08 10:25:48.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:48 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40920 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:48.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:25:48 compute-0 ceph-mon[73572]: pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:49 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40934 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:49 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40944 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:49.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:49 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40956 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:49 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40972 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:50 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40988 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:50 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:50 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40998 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:50 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41004 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:50.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:50 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41010 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:50 compute-0 sudo[296249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:25:50 compute-0 sudo[296249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:25:50 compute-0 sudo[296249]: pam_unix(sudo:session): session closed for user root
Oct 08 10:25:50 compute-0 podman[296272]: 2025-10-08 10:25:50.911910992 +0000 UTC m=+0.064603764 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 08 10:25:50 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41016 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:51 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41026 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:51 compute-0 nova_compute[262220]: 2025-10-08 10:25:51.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:51 compute-0 ceph-mon[73572]: pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:25:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:51 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41028 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:51 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41032 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:51 compute-0 nova_compute[262220]: 2025-10-08 10:25:51.913 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:51 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41036 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:52 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:52 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41042 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:52 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41058 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:52.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:52 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41074 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:52 compute-0 nova_compute[262220]: 2025-10-08 10:25:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:52 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41080 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:53 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41084 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:53 compute-0 ceph-mon[73572]: pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:25:53 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41090 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:53.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:53 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41094 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:53 compute-0 nova_compute[262220]: 2025-10-08 10:25:53.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:53 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41098 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:53 compute-0 nova_compute[262220]: 2025-10-08 10:25:53.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:53 compute-0 nova_compute[262220]: 2025-10-08 10:25:53.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:25:53 compute-0 nova_compute[262220]: 2025-10-08 10:25:53.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:25:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:54 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41108 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:54 compute-0 nova_compute[262220]: 2025-10-08 10:25:54.103 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:25:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 14 op/s
Oct 08 10:25:54 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41112 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:54 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:41122 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:54.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:54 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57244 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:54 compute-0 nova_compute[262220]: 2025-10-08 10:25:54.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:54 compute-0 nova_compute[262220]: 2025-10-08 10:25:54.953 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:25:54 compute-0 nova_compute[262220]: 2025-10-08 10:25:54.954 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:25:54 compute-0 nova_compute[262220]: 2025-10-08 10:25:54.954 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:25:54 compute-0 nova_compute[262220]: 2025-10-08 10:25:54.954 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:25:54 compute-0 nova_compute[262220]: 2025-10-08 10:25:54.954 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:25:54 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57252 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:55 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57260 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:55 compute-0 ceph-mon[73572]: pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 14 op/s
Oct 08 10:25:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:25:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/882277071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:25:55 compute-0 nova_compute[262220]: 2025-10-08 10:25:55.410 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:25:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:55 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57272 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:55 compute-0 nova_compute[262220]: 2025-10-08 10:25:55.609 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:25:55 compute-0 nova_compute[262220]: 2025-10-08 10:25:55.610 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4470MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:25:55 compute-0 nova_compute[262220]: 2025-10-08 10:25:55.610 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:25:55 compute-0 nova_compute[262220]: 2025-10-08 10:25:55.610 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:25:55 compute-0 nova_compute[262220]: 2025-10-08 10:25:55.698 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:25:55 compute-0 nova_compute[262220]: 2025-10-08 10:25:55.698 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:25:55 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57282 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:55 compute-0 nova_compute[262220]: 2025-10-08 10:25:55.725 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:25:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:25:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:25:55 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57288 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:56 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57296 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct 08 10:25:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:25:56 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399501425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:25:56 compute-0 nova_compute[262220]: 2025-10-08 10:25:56.233 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:25:56 compute-0 nova_compute[262220]: 2025-10-08 10:25:56.239 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:25:56 compute-0 nova_compute[262220]: 2025-10-08 10:25:56.261 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:25:56 compute-0 nova_compute[262220]: 2025-10-08 10:25:56.262 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:25:56 compute-0 nova_compute[262220]: 2025-10-08 10:25:56.262 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:25:56 compute-0 nova_compute[262220]: 2025-10-08 10:25:56.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/882277071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:25:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1399501425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:25:56 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57306 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:56.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:56 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57308 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:56 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57310 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:57 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57324 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:25:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:57.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:25:57 compute-0 nova_compute[262220]: 2025-10-08 10:25:57.263 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:57 compute-0 nova_compute[262220]: 2025-10-08 10:25:57.263 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:57 compute-0 nova_compute[262220]: 2025-10-08 10:25:57.263 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:57 compute-0 nova_compute[262220]: 2025-10-08 10:25:57.263 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:25:57 compute-0 nova_compute[262220]: 2025-10-08 10:25:57.264 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:25:57 compute-0 ceph-mon[73572]: pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct 08 10:25:57 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2070749649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:25:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:25:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:25:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:25:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:25:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:25:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:25:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:57.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:25:57 compute-0 unix_chkpwd[296347]: password check failed for user (root)
Oct 08 10:25:57 compute-0 sshd-session[296345]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:25:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct 08 10:25:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/566668693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:25:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:25:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:58.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:25:58 compute-0 nova_compute[262220]: 2025-10-08 10:25:58.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:25:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:58.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:25:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:25:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:25:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:25:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:25:59 compute-0 ceph-mon[73572]: pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct 08 10:25:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:25:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:25:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:25:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:59.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 0 B/s wr, 180 op/s
Oct 08 10:26:00 compute-0 sshd-session[296345]: Failed password for root from 196.203.106.113 port 57328 ssh2
Oct 08 10:26:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:26:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:00.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:26:00 compute-0 sshd-session[296345]: Connection closed by authenticating user root 196.203.106.113 port 57328 [preauth]
Oct 08 10:26:01 compute-0 podman[296351]: 2025-10-08 10:26:01.009217295 +0000 UTC m=+0.164716647 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:26:01 compute-0 nova_compute[262220]: 2025-10-08 10:26:01.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:01 compute-0 ceph-mon[73572]: pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 0 B/s wr, 180 op/s
Oct 08 10:26:01 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2678591205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:26:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:01.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:01 compute-0 unix_chkpwd[296381]: password check failed for user (root)
Oct 08 10:26:01 compute-0 sshd-session[296378]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:26:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Oct 08 10:26:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2785935627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:26:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:02.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:26:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:26:03 compute-0 ceph-mon[73572]: pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Oct 08 10:26:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:26:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:03.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:03 compute-0 sshd-session[296378]: Failed password for root from 196.203.106.113 port 57330 ssh2
Oct 08 10:26:03 compute-0 nova_compute[262220]: 2025-10-08 10:26:03.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:03 compute-0 podman[296385]: 2025-10-08 10:26:03.891970197 +0000 UTC m=+0.051757499 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 10:26:03 compute-0 podman[296384]: 2025-10-08 10:26:03.898433925 +0000 UTC m=+0.060748178 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 08 10:26:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 0 B/s wr, 180 op/s
Oct 08 10:26:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:04 compute-0 sshd-session[296378]: Connection closed by authenticating user root 196.203.106.113 port 57330 [preauth]
Oct 08 10:26:04 compute-0 nova_compute[262220]: 2025-10-08 10:26:04.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:05 compute-0 ceph-mon[73572]: pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 0 B/s wr, 180 op/s
Oct 08 10:26:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:26:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:05.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:26:05 compute-0 unix_chkpwd[296424]: password check failed for user (root)
Oct 08 10:26:05 compute-0 sshd-session[296421]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=root
Oct 08 10:26:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:26:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:26:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Oct 08 10:26:06 compute-0 nova_compute[262220]: 2025-10-08 10:26:06.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:06.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:07 compute-0 sshd-session[296421]: Failed password for root from 196.203.106.113 port 35530 ssh2
Oct 08 10:26:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:07.240Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:26:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:07.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:07 compute-0 ceph-mon[73572]: pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Oct 08 10:26:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:07.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Oct 08 10:26:08 compute-0 sshd-session[296421]: Connection closed by authenticating user root 196.203.106.113 port 35530 [preauth]
Oct 08 10:26:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:08.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:08 compute-0 nova_compute[262220]: 2025-10-08 10:26:08.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:08.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:09 compute-0 sshd-session[296428]: Invalid user user from 196.203.106.113 port 35536
Oct 08 10:26:09 compute-0 sshd-session[296428]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:09 compute-0 sshd-session[296428]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.455223) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169455270, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 991, "num_deletes": 251, "total_data_size": 1639369, "memory_usage": 1672104, "flush_reason": "Manual Compaction"}
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 08 10:26:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:09.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169465994, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1031799, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34631, "largest_seqno": 35621, "table_properties": {"data_size": 1027887, "index_size": 1564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10583, "raw_average_key_size": 20, "raw_value_size": 1019376, "raw_average_value_size": 2018, "num_data_blocks": 67, "num_entries": 505, "num_filter_entries": 505, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759919085, "oldest_key_time": 1759919085, "file_creation_time": 1759919169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 10819 microseconds, and 3265 cpu microseconds.
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.466051) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1031799 bytes OK
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.466065) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.468887) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.468898) EVENT_LOG_v1 {"time_micros": 1759919169468894, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.468915) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1634785, prev total WAL file size 1635490, number of live WAL files 2.
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.469548) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323536' seq:0, type:0; will stop at (end)
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1007KB)], [74(13MB)]
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169469615, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15709490, "oldest_snapshot_seqno": -1}
Oct 08 10:26:09 compute-0 ceph-mon[73572]: pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6522 keys, 12181676 bytes, temperature: kUnknown
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169544203, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12181676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12141286, "index_size": 23000, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171606, "raw_average_key_size": 26, "raw_value_size": 12026902, "raw_average_value_size": 1844, "num_data_blocks": 900, "num_entries": 6522, "num_filter_entries": 6522, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.544707) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12181676 bytes
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.547797) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.0 rd, 162.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 14.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(27.0) write-amplify(11.8) OK, records in: 7004, records dropped: 482 output_compression: NoCompression
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.547824) EVENT_LOG_v1 {"time_micros": 1759919169547811, "job": 42, "event": "compaction_finished", "compaction_time_micros": 74801, "compaction_time_cpu_micros": 27021, "output_level": 6, "num_output_files": 1, "total_output_size": 12181676, "num_input_records": 7004, "num_output_records": 6522, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169548301, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169551463, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.469445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:26:09 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:26:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 166 op/s
Oct 08 10:26:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:10.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:10 compute-0 sudo[296432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:26:10 compute-0 sudo[296432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:10 compute-0 sudo[296432]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:11 compute-0 nova_compute[262220]: 2025-10-08 10:26:11.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:11 compute-0 ceph-mon[73572]: pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 166 op/s
Oct 08 10:26:11 compute-0 sshd-session[296428]: Failed password for invalid user user from 196.203.106.113 port 35536 ssh2
Oct 08 10:26:12 compute-0 sshd-session[296428]: Connection closed by invalid user user 196.203.106.113 port 35536 [preauth]
Oct 08 10:26:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:12 compute-0 sshd-session[296459]: Invalid user user from 196.203.106.113 port 35540
Oct 08 10:26:12 compute-0 sshd-session[296459]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:12 compute-0 sshd-session[296459]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:13.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:13 compute-0 ceph-mon[73572]: pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:13 compute-0 nova_compute[262220]: 2025-10-08 10:26:13.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:14.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:14 compute-0 sshd-session[296459]: Failed password for invalid user user from 196.203.106.113 port 35540 ssh2
Oct 08 10:26:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:15.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:15 compute-0 sshd-session[296459]: Connection closed by invalid user user 196.203.106.113 port 35540 [preauth]
Oct 08 10:26:15 compute-0 ceph-mon[73572]: pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:26:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:26:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:16 compute-0 sshd-session[296464]: Invalid user user from 196.203.106.113 port 42742
Oct 08 10:26:16 compute-0 nova_compute[262220]: 2025-10-08 10:26:16.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:16 compute-0 sshd-session[296464]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:16 compute-0 sshd-session[296464]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:16.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:16 compute-0 ceph-mon[73572]: pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:17.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:17.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:26:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:26:17 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:26:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:26:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:26:17 compute-0 sshd-session[296464]: Failed password for invalid user user from 196.203.106.113 port 42742 ssh2
Oct 08 10:26:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:26:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:26:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:26:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:26:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:18 compute-0 nova_compute[262220]: 2025-10-08 10:26:18.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:18.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:18 compute-0 ceph-mon[73572]: pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:19 compute-0 sshd-session[296464]: Connection closed by invalid user user 196.203.106.113 port 42742 [preauth]
Oct 08 10:26:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:19.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:19 compute-0 sshd-session[296470]: Invalid user user from 196.203.106.113 port 42758
Oct 08 10:26:19 compute-0 sshd-session[296470]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:19 compute-0 sshd-session[296470]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:20.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:21 compute-0 ceph-mon[73572]: pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3381154340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:26:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/3381154340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:26:21 compute-0 nova_compute[262220]: 2025-10-08 10:26:21.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:21.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:21 compute-0 podman[296474]: 2025-10-08 10:26:21.925914705 +0000 UTC m=+0.078602697 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:26:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:22 compute-0 sshd-session[296470]: Failed password for invalid user user from 196.203.106.113 port 42758 ssh2
Oct 08 10:26:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:22 compute-0 sshd-session[296470]: Connection closed by invalid user user 196.203.106.113 port 42758 [preauth]
Oct 08 10:26:23 compute-0 ceph-mon[73572]: pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:23 compute-0 sshd-session[296496]: Invalid user user from 196.203.106.113 port 42764
Oct 08 10:26:23 compute-0 sshd-session[296496]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:23 compute-0 sshd-session[296496]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:26:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:23.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:26:23 compute-0 nova_compute[262220]: 2025-10-08 10:26:23.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:25 compute-0 ceph-mon[73572]: pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:25 compute-0 sshd-session[296496]: Failed password for invalid user user from 196.203.106.113 port 42764 ssh2
Oct 08 10:26:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:25.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:26:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:26:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:26 compute-0 sshd-session[296496]: Connection closed by invalid user user 196.203.106.113 port 42764 [preauth]
Oct 08 10:26:26 compute-0 nova_compute[262220]: 2025-10-08 10:26:26.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:26:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:26:26 compute-0 sshd-session[296502]: Invalid user user from 196.203.106.113 port 46782
Oct 08 10:26:26 compute-0 sshd-session[296502]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:26 compute-0 sshd-session[296502]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:27.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:27 compute-0 ceph-mon[73572]: pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:28 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:28 compute-0 nova_compute[262220]: 2025-10-08 10:26:28.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:28.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:29 compute-0 sshd-session[296502]: Failed password for invalid user user from 196.203.106.113 port 46782 ssh2
Oct 08 10:26:29 compute-0 ceph-mon[73572]: pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:26:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:29.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:26:29 compute-0 sshd-session[296502]: Connection closed by invalid user user 196.203.106.113 port 46782 [preauth]
Oct 08 10:26:29 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46786 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:30 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46794 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:30 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:30 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46802 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:30 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46812 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:30.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:30 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46818 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:31 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46824 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:31 compute-0 sudo[296508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:26:31 compute-0 sudo[296508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:31 compute-0 sudo[296508]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:31 compute-0 podman[296532]: 2025-10-08 10:26:31.220230056 +0000 UTC m=+0.127114229 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 10:26:31 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46838 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:31 compute-0 nova_compute[262220]: 2025-10-08 10:26:31.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:31 compute-0 ceph-mon[73572]: pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:31.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:31 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46854 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:31 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46866 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:32 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46868 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:32 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:32 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46874 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:32 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46878 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:32 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46892 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:26:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:26:32 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46904 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:33 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46920 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:33 compute-0 ceph-mon[73572]: pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:26:33 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46936 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:33.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:33 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46952 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:33 compute-0 nova_compute[262220]: 2025-10-08 10:26:33.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:33 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46964 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:34 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46980 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:34 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:34 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:46984 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:34 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:56970 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:34 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:56984 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:34 compute-0 podman[296564]: 2025-10-08 10:26:34.893168675 +0000 UTC m=+0.047603644 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:26:34 compute-0 podman[296563]: 2025-10-08 10:26:34.894946303 +0000 UTC m=+0.055094637 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:26:35 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:56990 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:35 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:56992 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:35 compute-0 ceph-mon[73572]: pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:35.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:35 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57008 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:35] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:26:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:35] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:26:35 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57018 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:36 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57030 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:36 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:36 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57038 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:36 compute-0 nova_compute[262220]: 2025-10-08 10:26:36.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:36 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57050 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:36.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:36 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57060 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:36 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57062 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:37 compute-0 sudo[296601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:26:37 compute-0 sudo[296601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:37 compute-0 sudo[296601]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:37 compute-0 sudo[296626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:26:37 compute-0 sudo[296626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:37 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57072 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:37.243Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:37 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57084 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:37 compute-0 ceph-mon[73572]: pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:37.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:37 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57100 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:37 compute-0 sudo[296626]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:26:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:26:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:26:37 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:26:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:26:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:26:37 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:26:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:26:37 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:26:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:26:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:26:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:26:37 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:26:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:26:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:26:37 compute-0 sudo[296685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:26:37 compute-0 sudo[296685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:37 compute-0 sudo[296685]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:37 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57104 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:37 compute-0 sudo[296710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:26:37 compute-0 sudo[296710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:38 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57112 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:38 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57126 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:38 compute-0 podman[296776]: 2025-10-08 10:26:38.393886226 +0000 UTC m=+0.046061234 container create e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 08 10:26:38 compute-0 systemd[1]: Started libpod-conmon-e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d.scope.
Oct 08 10:26:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:26:38 compute-0 podman[296776]: 2025-10-08 10:26:38.371871742 +0000 UTC m=+0.024046771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:26:38 compute-0 podman[296776]: 2025-10-08 10:26:38.478875408 +0000 UTC m=+0.131050436 container init e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:26:38 compute-0 podman[296776]: 2025-10-08 10:26:38.487372783 +0000 UTC m=+0.139547801 container start e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:26:38 compute-0 podman[296776]: 2025-10-08 10:26:38.490526756 +0000 UTC m=+0.142701764 container attach e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:26:38 compute-0 hardcore_almeida[296792]: 167 167
Oct 08 10:26:38 compute-0 systemd[1]: libpod-e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d.scope: Deactivated successfully.
Oct 08 10:26:38 compute-0 conmon[296792]: conmon e0dbdb7bd00942091dff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d.scope/container/memory.events
Oct 08 10:26:38 compute-0 podman[296776]: 2025-10-08 10:26:38.495115265 +0000 UTC m=+0.147290273 container died e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 08 10:26:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:26:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:26:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:26:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:26:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:26:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:26:38 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b5a44ff2ca548ca1c1ca953c92e87836ad16d3b98ce4a3edd6139245d0460eb-merged.mount: Deactivated successfully.
Oct 08 10:26:38 compute-0 podman[296776]: 2025-10-08 10:26:38.550460478 +0000 UTC m=+0.202635526 container remove e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:26:38 compute-0 systemd[1]: libpod-conmon-e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d.scope: Deactivated successfully.
Oct 08 10:26:38 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57130 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:38 compute-0 podman[296816]: 2025-10-08 10:26:38.739353127 +0000 UTC m=+0.047636585 container create 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 10:26:38 compute-0 systemd[1]: Started libpod-conmon-86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90.scope.
Oct 08 10:26:38 compute-0 podman[296816]: 2025-10-08 10:26:38.715865286 +0000 UTC m=+0.024148774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:26:38 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:26:38 compute-0 nova_compute[262220]: 2025-10-08 10:26:38.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:38 compute-0 podman[296816]: 2025-10-08 10:26:38.836102892 +0000 UTC m=+0.144386380 container init 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:26:38 compute-0 podman[296816]: 2025-10-08 10:26:38.846315722 +0000 UTC m=+0.154599190 container start 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 08 10:26:38 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57132 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:38 compute-0 podman[296816]: 2025-10-08 10:26:38.850817799 +0000 UTC m=+0.159101377 container attach 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:26:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:38.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:39 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57148 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:39 compute-0 infallible_cray[296832]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:26:39 compute-0 infallible_cray[296832]: --> All data devices are unavailable
Oct 08 10:26:39 compute-0 systemd[1]: libpod-86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90.scope: Deactivated successfully.
Oct 08 10:26:39 compute-0 conmon[296832]: conmon 86fe57aed5fe6185e2ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90.scope/container/memory.events
Oct 08 10:26:39 compute-0 podman[296816]: 2025-10-08 10:26:39.172939603 +0000 UTC m=+0.481223091 container died 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085-merged.mount: Deactivated successfully.
Oct 08 10:26:39 compute-0 podman[296816]: 2025-10-08 10:26:39.21544026 +0000 UTC m=+0.523723718 container remove 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:26:39 compute-0 systemd[1]: libpod-conmon-86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90.scope: Deactivated successfully.
Oct 08 10:26:39 compute-0 sudo[296710]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:39 compute-0 sudo[296859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:26:39 compute-0 sudo[296859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:39 compute-0 sudo[296859]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:39 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57164 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:39 compute-0 sudo[296884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:26:39 compute-0 sudo[296884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:39.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:39 compute-0 ceph-mon[73572]: pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:26:39 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57170 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:26:39 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57180 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:39 compute-0 podman[296950]: 2025-10-08 10:26:39.82599754 +0000 UTC m=+0.054843687 container create e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:26:39 compute-0 systemd[1]: Started libpod-conmon-e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3.scope.
Oct 08 10:26:39 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:26:39 compute-0 podman[296950]: 2025-10-08 10:26:39.801142785 +0000 UTC m=+0.029989022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:26:39 compute-0 podman[296950]: 2025-10-08 10:26:39.899562773 +0000 UTC m=+0.128408930 container init e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 08 10:26:39 compute-0 podman[296950]: 2025-10-08 10:26:39.911028145 +0000 UTC m=+0.139874292 container start e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:26:39 compute-0 podman[296950]: 2025-10-08 10:26:39.914707944 +0000 UTC m=+0.143554091 container attach e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:26:39 compute-0 nervous_greider[296966]: 167 167
Oct 08 10:26:39 compute-0 systemd[1]: libpod-e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3.scope: Deactivated successfully.
Oct 08 10:26:39 compute-0 podman[296950]: 2025-10-08 10:26:39.918202197 +0000 UTC m=+0.147048354 container died e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-27dd9aecf00198f77429f9757298b210fd5f84c26b9d4929d59ee8189abd6859-merged.mount: Deactivated successfully.
Oct 08 10:26:39 compute-0 podman[296950]: 2025-10-08 10:26:39.96306878 +0000 UTC m=+0.191914937 container remove e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 08 10:26:39 compute-0 systemd[1]: libpod-conmon-e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3.scope: Deactivated successfully.
Oct 08 10:26:40 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57192 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:40 compute-0 podman[296991]: 2025-10-08 10:26:40.110885399 +0000 UTC m=+0.039056506 container create 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:26:40 compute-0 systemd[1]: Started libpod-conmon-556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b.scope.
Oct 08 10:26:40 compute-0 podman[296991]: 2025-10-08 10:26:40.094837829 +0000 UTC m=+0.023008956 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:26:40 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:40 compute-0 podman[296991]: 2025-10-08 10:26:40.22944202 +0000 UTC m=+0.157613197 container init 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:26:40 compute-0 podman[296991]: 2025-10-08 10:26:40.240143117 +0000 UTC m=+0.168314264 container start 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 08 10:26:40 compute-0 podman[296991]: 2025-10-08 10:26:40.244346783 +0000 UTC m=+0.172517920 container attach 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:26:40 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57196 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]: {
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:     "1": [
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:         {
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "devices": [
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "/dev/loop3"
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             ],
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "lv_name": "ceph_lv0",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "lv_size": "21470642176",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "name": "ceph_lv0",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "tags": {
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.cluster_name": "ceph",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.crush_device_class": "",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.encrypted": "0",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.osd_id": "1",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.type": "block",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.vdo": "0",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:                 "ceph.with_tpm": "0"
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             },
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "type": "block",
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:             "vg_name": "ceph_vg0"
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:         }
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]:     ]
Oct 08 10:26:40 compute-0 vigilant_meninsky[297007]: }
Oct 08 10:26:40 compute-0 systemd[1]: libpod-556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b.scope: Deactivated successfully.
Oct 08 10:26:40 compute-0 podman[296991]: 2025-10-08 10:26:40.528531999 +0000 UTC m=+0.456703106 container died 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:26:40 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57202 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848-merged.mount: Deactivated successfully.
Oct 08 10:26:40 compute-0 podman[296991]: 2025-10-08 10:26:40.572083831 +0000 UTC m=+0.500254948 container remove 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:26:40 compute-0 systemd[1]: libpod-conmon-556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b.scope: Deactivated successfully.
Oct 08 10:26:40 compute-0 sudo[296884]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:40 compute-0 sudo[297028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:26:40 compute-0 sudo[297028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:40 compute-0 sudo[297028]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:40 compute-0 sudo[297053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:26:40 compute-0 sudo[297053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:40 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57208 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57212 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:41 compute-0 podman[297119]: 2025-10-08 10:26:41.193161941 +0000 UTC m=+0.069729780 container create 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:26:41 compute-0 podman[297119]: 2025-10-08 10:26:41.144332049 +0000 UTC m=+0.020899868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:26:41 compute-0 systemd[1]: Started libpod-conmon-60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf.scope.
Oct 08 10:26:41 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:26:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57222 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:41 compute-0 podman[297119]: 2025-10-08 10:26:41.281331307 +0000 UTC m=+0.157899116 container init 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 10:26:41 compute-0 podman[297119]: 2025-10-08 10:26:41.292304392 +0000 UTC m=+0.168872191 container start 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 08 10:26:41 compute-0 podman[297119]: 2025-10-08 10:26:41.295395493 +0000 UTC m=+0.171963372 container attach 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Oct 08 10:26:41 compute-0 trusting_elion[297137]: 167 167
Oct 08 10:26:41 compute-0 systemd[1]: libpod-60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf.scope: Deactivated successfully.
Oct 08 10:26:41 compute-0 conmon[297137]: conmon 60903b22f54e3c46a4bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf.scope/container/memory.events
Oct 08 10:26:41 compute-0 podman[297119]: 2025-10-08 10:26:41.298352348 +0000 UTC m=+0.174920147 container died 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 08 10:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-20aeb0da276dbddbd0e8626434fa93a44012a47d079b818e3359a2209a0be493-merged.mount: Deactivated successfully.
Oct 08 10:26:41 compute-0 nova_compute[262220]: 2025-10-08 10:26:41.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:41 compute-0 podman[297119]: 2025-10-08 10:26:41.338187249 +0000 UTC m=+0.214755048 container remove 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:26:41 compute-0 systemd[1]: libpod-conmon-60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf.scope: Deactivated successfully.
Oct 08 10:26:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:41.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57236 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:41 compute-0 podman[297161]: 2025-10-08 10:26:41.534895782 +0000 UTC m=+0.061120721 container create 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:26:41 compute-0 ceph-mon[73572]: pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:26:41 compute-0 systemd[1]: Started libpod-conmon-3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb.scope.
Oct 08 10:26:41 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:26:41 compute-0 podman[297161]: 2025-10-08 10:26:41.513835429 +0000 UTC m=+0.040060398 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:26:41 compute-0 podman[297161]: 2025-10-08 10:26:41.61819111 +0000 UTC m=+0.144416049 container init 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:26:41 compute-0 podman[297161]: 2025-10-08 10:26:41.623575665 +0000 UTC m=+0.149800584 container start 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:26:41 compute-0 podman[297161]: 2025-10-08 10:26:41.627047167 +0000 UTC m=+0.153272096 container attach 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:26:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57250 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:26:41 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57252 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:42 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57260 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:42 compute-0 lvm[297253]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:26:42 compute-0 lvm[297253]: VG ceph_vg0 finished
Oct 08 10:26:42 compute-0 wonderful_ritchie[297178]: {}
Oct 08 10:26:42 compute-0 systemd[1]: libpod-3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb.scope: Deactivated successfully.
Oct 08 10:26:42 compute-0 systemd[1]: libpod-3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb.scope: Consumed 1.103s CPU time.
Oct 08 10:26:42 compute-0 podman[297161]: 2025-10-08 10:26:42.377299523 +0000 UTC m=+0.903524442 container died 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 08 10:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698-merged.mount: Deactivated successfully.
Oct 08 10:26:42 compute-0 podman[297161]: 2025-10-08 10:26:42.43863959 +0000 UTC m=+0.964864549 container remove 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 10:26:42 compute-0 systemd[1]: libpod-conmon-3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb.scope: Deactivated successfully.
Oct 08 10:26:42 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57272 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:42 compute-0 sudo[297053]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:26:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:26:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:26:42 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:26:42 compute-0 sudo[297269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:26:42 compute-0 sudo[297269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:42 compute-0 sudo[297269]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:42 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57278 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:42.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:42 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57292 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:43 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57308 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:43 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57314 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:43.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:43 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57316 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:43 compute-0 ceph-mon[73572]: pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:26:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:26:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:26:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:26:43 compute-0 nova_compute[262220]: 2025-10-08 10:26:43.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:43 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57332 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:44 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57340 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:44 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:57342 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:44 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:47840 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:44 compute-0 ceph-mon[73572]: pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:26:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:44 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:47844 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:47854 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:47870 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:45.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:47872 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:45] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:26:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:45] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:26:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:47886 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:26:45 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:47896 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:46 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:47908 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:26:46 compute-0 nova_compute[262220]: 2025-10-08 10:26:46.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:46 compute-0 ceph-mon[73572]: pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:26:46 compute-0 sshd-session[297298]: Invalid user user from 196.203.106.113 port 47924
Oct 08 10:26:47 compute-0 sshd-session[297298]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:47 compute-0 sshd-session[297298]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:47.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:47.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:26:47
Oct 08 10:26:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:26:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:26:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'images', 'vms', '.nfs']
Oct 08 10:26:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:26:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:26:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:26:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:26:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:26:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:26:47 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:26:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:26:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:48.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:48 compute-0 nova_compute[262220]: 2025-10-08 10:26:48.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:48.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:48 compute-0 sshd-session[297298]: Failed password for invalid user user from 196.203.106.113 port 47924 ssh2
Oct 08 10:26:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:49 compute-0 ceph-mon[73572]: pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:26:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:49.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:49 compute-0 sshd-session[297298]: Connection closed by invalid user user 196.203.106.113 port 47924 [preauth]
Oct 08 10:26:50 compute-0 sshd-session[297304]: Invalid user user from 196.203.106.113 port 47934
Oct 08 10:26:50 compute-0 sshd-session[297304]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:50 compute-0 sshd-session[297304]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:51 compute-0 sudo[297306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:26:51 compute-0 sudo[297306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:26:51 compute-0 sudo[297306]: pam_unix(sudo:session): session closed for user root
Oct 08 10:26:51 compute-0 ceph-mon[73572]: pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:51 compute-0 nova_compute[262220]: 2025-10-08 10:26:51.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:26:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:51.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:26:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:52.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:52 compute-0 nova_compute[262220]: 2025-10-08 10:26:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:52 compute-0 nova_compute[262220]: 2025-10-08 10:26:52.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:52 compute-0 podman[297333]: 2025-10-08 10:26:52.916806437 +0000 UTC m=+0.074785907 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:26:53 compute-0 sshd-session[297304]: Failed password for invalid user user from 196.203.106.113 port 47934 ssh2
Oct 08 10:26:53 compute-0 ceph-mon[73572]: pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:53 compute-0 sshd-session[297304]: Connection closed by invalid user user 196.203.106.113 port 47934 [preauth]
Oct 08 10:26:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:53 compute-0 nova_compute[262220]: 2025-10-08 10:26:53.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:53 compute-0 nova_compute[262220]: 2025-10-08 10:26:53.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:54 compute-0 sshd-session[297354]: Invalid user user from 196.203.106.113 port 47944
Oct 08 10:26:54 compute-0 sshd-session[297354]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:54 compute-0 sshd-session[297354]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:54.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:55 compute-0 ceph-mon[73572]: pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:26:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:55.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:26:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:26:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:55 compute-0 nova_compute[262220]: 2025-10-08 10:26:55.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:55 compute-0 nova_compute[262220]: 2025-10-08 10:26:55.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:26:55 compute-0 nova_compute[262220]: 2025-10-08 10:26:55.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:26:55 compute-0 nova_compute[262220]: 2025-10-08 10:26:55.923 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:26:56 compute-0 sshd-session[297354]: Failed password for invalid user user from 196.203.106.113 port 47944 ssh2
Oct 08 10:26:56 compute-0 nova_compute[262220]: 2025-10-08 10:26:56.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:56 compute-0 nova_compute[262220]: 2025-10-08 10:26:56.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:56 compute-0 nova_compute[262220]: 2025-10-08 10:26:56.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:57 compute-0 sshd-session[297354]: Connection closed by invalid user user 196.203.106.113 port 47944 [preauth]
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.121 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.122 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.122 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.122 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.122 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:26:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:57.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:57 compute-0 ceph-mon[73572]: pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:26:57.426 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:26:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:26:57.426 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:26:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:26:57.426 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:26:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:57.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:26:57 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278330368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.566 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:26:57 compute-0 sshd-session[297360]: Invalid user user from 196.203.106.113 port 44962
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.720 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.721 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4464MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.721 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.722 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:26:57 compute-0 sshd-session[297360]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:26:57 compute-0 sshd-session[297360]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:26:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.939 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.939 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:26:57 compute-0 nova_compute[262220]: 2025-10-08 10:26:57.955 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:26:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4278330368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:26:58 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/143058814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:26:58 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:26:58 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826862654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:26:58 compute-0 nova_compute[262220]: 2025-10-08 10:26:58.397 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:26:58 compute-0 nova_compute[262220]: 2025-10-08 10:26:58.403 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:26:58 compute-0 nova_compute[262220]: 2025-10-08 10:26:58.440 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:26:58 compute-0 nova_compute[262220]: 2025-10-08 10:26:58.443 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:26:58 compute-0 nova_compute[262220]: 2025-10-08 10:26:58.443 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:26:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:58.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:58 compute-0 nova_compute[262220]: 2025-10-08 10:26:58.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:26:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:58.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:26:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:26:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:26:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:26:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:26:59 compute-0 ceph-mon[73572]: pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:26:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1826862654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:26:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3955881954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:26:59 compute-0 nova_compute[262220]: 2025-10-08 10:26:59.440 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:59 compute-0 nova_compute[262220]: 2025-10-08 10:26:59.441 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:59 compute-0 nova_compute[262220]: 2025-10-08 10:26:59.441 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:26:59 compute-0 nova_compute[262220]: 2025-10-08 10:26:59.441 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:26:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:26:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:26:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:26:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:26:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:00 compute-0 sshd-session[297360]: Failed password for invalid user user from 196.203.106.113 port 44962 ssh2
Oct 08 10:27:00 compute-0 sshd-session[297360]: Connection closed by invalid user user 196.203.106.113 port 44962 [preauth]
Oct 08 10:27:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:00.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:01 compute-0 sshd-session[297409]: Invalid user user from 196.203.106.113 port 44976
Oct 08 10:27:01 compute-0 sshd-session[297409]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:01 compute-0 sshd-session[297409]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:01 compute-0 nova_compute[262220]: 2025-10-08 10:27:01.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:01 compute-0 podman[297412]: 2025-10-08 10:27:01.420359901 +0000 UTC m=+0.107669785 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 08 10:27:01 compute-0 ceph-mon[73572]: pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:01.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:02.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:27:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:27:03 compute-0 ceph-mon[73572]: pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:27:03 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2309891920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:27:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:03.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:03 compute-0 sshd-session[297409]: Failed password for invalid user user from 196.203.106.113 port 44976 ssh2
Oct 08 10:27:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:03 compute-0 nova_compute[262220]: 2025-10-08 10:27:03.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:04 compute-0 sshd-session[297409]: Connection closed by invalid user user 196.203.106.113 port 44976 [preauth]
Oct 08 10:27:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:04 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2927262353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:27:04 compute-0 sshd-session[297441]: Invalid user user from 196.203.106.113 port 44978
Oct 08 10:27:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:04.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:04 compute-0 sshd-session[297441]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:04 compute-0 sshd-session[297441]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:05 compute-0 ceph-mon[73572]: pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:27:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:27:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:05 compute-0 nova_compute[262220]: 2025-10-08 10:27:05.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:27:05 compute-0 podman[297445]: 2025-10-08 10:27:05.898297468 +0000 UTC m=+0.053172977 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 10:27:05 compute-0 podman[297444]: 2025-10-08 10:27:05.904851371 +0000 UTC m=+0.066919792 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 08 10:27:06 compute-0 sshd-session[297441]: Failed password for invalid user user from 196.203.106.113 port 44978 ssh2
Oct 08 10:27:06 compute-0 nova_compute[262220]: 2025-10-08 10:27:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:06.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:07.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:27:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:07.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:07 compute-0 sshd-session[297441]: Connection closed by invalid user user 196.203.106.113 port 44978 [preauth]
Oct 08 10:27:07 compute-0 ceph-mon[73572]: pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:08 compute-0 sshd-session[297486]: Invalid user ubuntu from 196.203.106.113 port 56378
Oct 08 10:27:08 compute-0 sshd-session[297486]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:08 compute-0 sshd-session[297486]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:08 compute-0 nova_compute[262220]: 2025-10-08 10:27:08.509 2 DEBUG oslo_concurrency.processutils [None req-24a520ff-12fb-4617-8845-d2e911b0cf17 1a472abd070641609b2c942b11b1118f 9bebada0871a4efa9df99c6beff34c13 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:27:08 compute-0 nova_compute[262220]: 2025-10-08 10:27:08.562 2 DEBUG oslo_concurrency.processutils [None req-24a520ff-12fb-4617-8845-d2e911b0cf17 1a472abd070641609b2c942b11b1118f 9bebada0871a4efa9df99c6beff34c13 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:27:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:08.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:08 compute-0 ceph-mon[73572]: pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:08 compute-0 nova_compute[262220]: 2025-10-08 10:27:08.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:08.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:27:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:08.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:27:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:09.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:10 compute-0 sshd-session[297486]: Failed password for invalid user ubuntu from 196.203.106.113 port 56378 ssh2
Oct 08 10:27:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:10.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:10 compute-0 ceph-mon[73572]: pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:11 compute-0 sudo[297493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:27:11 compute-0 sudo[297493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:11 compute-0 sudo[297493]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:11 compute-0 nova_compute[262220]: 2025-10-08 10:27:11.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:11 compute-0 sshd-session[297486]: Connection closed by invalid user ubuntu 196.203.106.113 port 56378 [preauth]
Oct 08 10:27:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:11.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:12 compute-0 sshd-session[297518]: Invalid user ubuntu from 196.203.106.113 port 56392
Oct 08 10:27:12 compute-0 sshd-session[297518]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:12 compute-0 sshd-session[297518]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:12.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:13 compute-0 ceph-mon[73572]: pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:13.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:13 compute-0 nova_compute[262220]: 2025-10-08 10:27:13.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:14 compute-0 sshd-session[297518]: Failed password for invalid user ubuntu from 196.203.106.113 port 56392 ssh2
Oct 08 10:27:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:14.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:15 compute-0 ceph-mon[73572]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:15 compute-0 nova_compute[262220]: 2025-10-08 10:27:15.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:15 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:27:15.128 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 08 10:27:15 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:27:15.129 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 08 10:27:15 compute-0 sshd-session[297518]: Connection closed by invalid user ubuntu 196.203.106.113 port 56392 [preauth]
Oct 08 10:27:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:15.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:27:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:27:15 compute-0 sshd-session[297524]: Invalid user ubuntu from 196.203.106.113 port 51468
Oct 08 10:27:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:15 compute-0 sshd-session[297524]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:15 compute-0 sshd-session[297524]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:16 compute-0 nova_compute[262220]: 2025-10-08 10:27:16.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:16.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:17 compute-0 ceph-mon[73572]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:17.249Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:27:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:17.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:27:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:27:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:27:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:27:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
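
The audit entries show the mgr (mgr.compute-0.ixicfj) periodically dispatching "osd blocklist ls" to the mon, which is routine housekeeping. The same query can be issued by hand; a sketch assuming a working ceph CLI and an admin keyring on this host:

    # Sketch: run the same "osd blocklist ls" query the mgr dispatches in the audit lines above.
    # Assumes /etc/ceph configuration and a keyring with permission to run the command.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=2))
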
Oct 08 10:27:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:27:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:27:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:27:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:27:18 compute-0 sshd-session[297524]: Failed password for invalid user ubuntu from 196.203.106.113 port 51468 ssh2
Oct 08 10:27:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:18.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:18 compute-0 nova_compute[262220]: 2025-10-08 10:27:18.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:18.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:27:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:18.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:27:18 compute-0 sshd-session[297524]: Connection closed by invalid user ubuntu 196.203.106.113 port 51468 [preauth]
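
Taken together, the sshd-session entries (Invalid user ubuntu, pam_unix authentication failure, Failed password, Connection closed [preauth]) are repeated password-guessing attempts from 196.203.106.113, and the "drop connection ... penalty: failed authentication" lines that follow show sshd refusing new connections from that source once its failure penalty has accumulated. A quick way to size the problem is to tally failures per source address from journal text; a sketch that assumes only the message wording seen above:

    # Sketch: count sshd "Failed password" / "Invalid user" events per source IP.
    # Usage assumption: pipe journal text in, e.g.  journalctl --since today | python3 ssh_failures.py
    import re
    import sys
    from collections import Counter

    PAT = re.compile(r"(Failed password|Invalid user).*?from (\d{1,3}(?:\.\d{1,3}){3})")

    hits = Counter()
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            hits[m.group(2)] += 1

    for ip, n in hits.most_common(10):
        print(f"{n:6d}  {ip}")
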
Oct 08 10:27:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
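
These ganesha.nfsd messages repeat every few seconds: the NFS server re-enters a 90-second grace period, reloads zero reclaimable clients (clid count(0)), and rados_cluster_grace_enforcing returns -45, so grace is never lifted cleanly. Counting how often the grace period restarts is a cheap first diagnostic; a sketch that only assumes the message text stays as logged above:

    # Sketch: count how many times ganesha (re)entered its NFS grace period.
    # Usage assumption: pipe journal text in, e.g.  journalctl | python3 grace_count.py
    import sys

    starts = sum("nfs_start_grace" in line and "Now IN GRACE" in line for line in sys.stdin)
    print(f"grace period (re)starts seen: {starts}")
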
Oct 08 10:27:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51478 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:19 compute-0 ceph-mon[73572]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51490 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51494 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:19.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51508 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51520 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:20 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:27:20.132 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
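
After the 5-second delay it announced earlier, the metadata agent writes the new nb_cfg value (15) into the Chassis_Private record's external_ids as neutron:ovn-metadata-sb-cfg via the DbSetCommand above. Reading that map back is a simple way to confirm the agent is keeping pace with northd; a sketch assuming local ovn-sbctl access to the southbound DB, reusing the record UUID from the log line:

    # Sketch: read back the external_ids map the metadata agent just updated.
    # Assumptions: ovn-sbctl can reach the SB DB from this host; the record UUID is copied
    # from the DbSetCommand log line above and will differ on other deployments.
    import subprocess

    CHASSIS_PRIVATE = "26869918-b723-425c-a2e1-0d697f3d0fec"

    ext_ids = subprocess.run(
        ["ovn-sbctl", "get", "Chassis_Private", CHASSIS_PRIVATE, "external_ids"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    # external_ids prints as an OVSDB map; the neutron:ovn-metadata-sb-cfg key should read "15".
    print(ext_ids)
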
Oct 08 10:27:20 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51536 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:20 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51538 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:20 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51546 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:20.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:20 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51562 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:21 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51568 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:21 compute-0 ceph-mon[73572]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1902933864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:27:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1902933864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:27:21 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51580 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:21 compute-0 nova_compute[262220]: 2025-10-08 10:27:21.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:27:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:27:21 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51590 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:21 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51592 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:22 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51596 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:22 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51610 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:22 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51620 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:22.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:22 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51636 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51646 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:23 compute-0 ceph-mon[73572]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51656 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51670 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51678 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:23 compute-0 nova_compute[262220]: 2025-10-08 10:27:23.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:23 compute-0 podman[297534]: 2025-10-08 10:27:23.897973597 +0000 UTC m=+0.054133428 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
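
The podman event above records a periodic healthcheck pass (health_status=healthy, failing streak 0) for the iscsid container; equivalent events appear later for ovn_controller, multipathd and ovn_metadata_agent. The recorded status can be read back directly; a sketch assuming local podman access and the container names from these events (older podman releases expose the field as .State.Healthcheck.Status rather than .State.Health.Status):

    # Sketch: query the health status podman recorded for the containers seen in these events.
    # Assumes the container names from the log and sufficient privileges to inspect them.
    import subprocess

    for name in ("iscsid", "ovn_controller", "multipathd", "ovn_metadata_agent"):
        status = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True,
        ).stdout.strip()
        print(f"{name}: {status or 'unknown'}")
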
Oct 08 10:27:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51684 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:24 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51700 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:24 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:51716 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:24 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55418 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:24.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:24 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55426 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:25 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55438 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:25 compute-0 ceph-mon[73572]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:25 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55452 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:25 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55458 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:25] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:27:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:25] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:27:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:25 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55474 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:26 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55486 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:26 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55498 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:26 compute-0 nova_compute[262220]: 2025-10-08 10:27:26.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:26 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55512 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:26.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:26 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55520 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:27 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55530 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:27.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:27:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:27.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:27:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:27.250Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
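
Alertmanager keeps failing to deliver the ceph-dashboard webhook to compute-1 and compute-2 on port 8443: first a dial timeout, then "notify retry canceled ... context deadline exceeded". A basic first check is whether those endpoints accept TCP connections at all; a sketch using only the host names and port that appear in the error messages:

    # Sketch: TCP reachability check for the webhook receivers alertmanager cannot deliver to.
    # Hosts and port are taken from the error messages above; nothing else is assumed about the receiver.
    import socket

    TARGETS = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in TARGETS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} NOT reachable: {exc}")
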
Oct 08 10:27:27 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55538 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:27 compute-0 ceph-mon[73572]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:27 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55550 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:27 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55564 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:28 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55576 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:28 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55592 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:28 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55604 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:28.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:28 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55612 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:28 compute-0 nova_compute[262220]: 2025-10-08 10:27:28.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:28.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:27:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:29 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55620 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:29 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55622 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:29 compute-0 ceph-mon[73572]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:29 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55628 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:29 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55638 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:29 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55644 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:30 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55654 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:30 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55670 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:30 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55672 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:30.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:30 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55684 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:31 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55690 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:31 compute-0 sudo[297562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:27:31 compute-0 sudo[297562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:31 compute-0 sudo[297562]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:31 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55696 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:31 compute-0 nova_compute[262220]: 2025-10-08 10:27:31.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:31 compute-0 ceph-mon[73572]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:31 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55710 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:31 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55724 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:31 compute-0 podman[297587]: 2025-10-08 10:27:31.906214407 +0000 UTC m=+0.071223321 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 08 10:27:32 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55738 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:32 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55754 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:32 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55756 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:32 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55766 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:32.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:27:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:27:33 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55774 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:33 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55778 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:33 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55788 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:33.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:33 compute-0 ceph-mon[73572]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:27:33 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55804 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:33 compute-0 nova_compute[262220]: 2025-10-08 10:27:33.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:33 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55808 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:34 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55816 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:34 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:55832 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:34 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:49594 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:34.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:34 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:49598 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:35 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:49600 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:27:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:35.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:35 compute-0 ceph-mon[73572]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:27:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:27:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:35 compute-0 sshd-session[297617]: Invalid user ubuntu from 196.203.106.113 port 49606
Oct 08 10:27:35 compute-0 sshd-session[297617]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:35 compute-0 sshd-session[297617]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:36 compute-0 podman[297619]: 2025-10-08 10:27:36.021968441 +0000 UTC m=+0.058502780 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 08 10:27:36 compute-0 podman[297620]: 2025-10-08 10:27:36.026408465 +0000 UTC m=+0.054695396 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 08 10:27:36 compute-0 nova_compute[262220]: 2025-10-08 10:27:36.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:36.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:37.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:27:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:37.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:37 compute-0 ceph-mon[73572]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:37 compute-0 sshd-session[297617]: Failed password for invalid user ubuntu from 196.203.106.113 port 49606 ssh2
Oct 08 10:27:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:38.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:38.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:27:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:38.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:27:38 compute-0 nova_compute[262220]: 2025-10-08 10:27:38.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:38 compute-0 sshd-session[297617]: Connection closed by invalid user ubuntu 196.203.106.113 port 49606 [preauth]
Oct 08 10:27:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:39 compute-0 sshd-session[297661]: Invalid user ubuntu from 196.203.106.113 port 49608
Oct 08 10:27:39 compute-0 ceph-mon[73572]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:39 compute-0 sshd-session[297661]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:39 compute-0 sshd-session[297661]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:40.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:41 compute-0 nova_compute[262220]: 2025-10-08 10:27:41.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:41.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:41 compute-0 ceph-mon[73572]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:42 compute-0 sshd-session[297661]: Failed password for invalid user ubuntu from 196.203.106.113 port 49608 ssh2
Oct 08 10:27:42 compute-0 sshd-session[297661]: Connection closed by invalid user ubuntu 196.203.106.113 port 49608 [preauth]
Oct 08 10:27:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:42.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:42 compute-0 sudo[297667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:27:42 compute-0 sudo[297667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:42 compute-0 sudo[297667]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:42 compute-0 sudo[297692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:27:42 compute-0 sudo[297692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:43 compute-0 sshd-session[297694]: Invalid user ubuntu from 196.203.106.113 port 49612
Oct 08 10:27:43 compute-0 sshd-session[297694]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:43 compute-0 sshd-session[297694]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:43 compute-0 sudo[297692]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:27:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:27:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:27:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:27:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:43.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:27:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:27:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:27:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:27:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:27:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:27:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:27:43 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:27:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:27:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:27:43 compute-0 sudo[297753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:27:43 compute-0 sudo[297753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:43 compute-0 sudo[297753]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:43 compute-0 ceph-mon[73572]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:27:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:27:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:27:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:27:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:27:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:27:43 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:27:43 compute-0 sudo[297778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:27:43 compute-0 sudo[297778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:43 compute-0 nova_compute[262220]: 2025-10-08 10:27:43.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:44 compute-0 podman[297846]: 2025-10-08 10:27:44.148909045 +0000 UTC m=+0.042221891 container create d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:27:44 compute-0 systemd[1]: Started libpod-conmon-d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36.scope.
Oct 08 10:27:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:27:44 compute-0 podman[297846]: 2025-10-08 10:27:44.129011159 +0000 UTC m=+0.022324025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:27:44 compute-0 podman[297846]: 2025-10-08 10:27:44.228015332 +0000 UTC m=+0.121328268 container init d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:27:44 compute-0 podman[297846]: 2025-10-08 10:27:44.234153762 +0000 UTC m=+0.127466608 container start d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 10:27:44 compute-0 ecstatic_mclaren[297862]: 167 167
Oct 08 10:27:44 compute-0 systemd[1]: libpod-d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36.scope: Deactivated successfully.
Oct 08 10:27:44 compute-0 podman[297846]: 2025-10-08 10:27:44.24025965 +0000 UTC m=+0.133572596 container attach d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 08 10:27:44 compute-0 podman[297846]: 2025-10-08 10:27:44.240971312 +0000 UTC m=+0.134284188 container died d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 10:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c01bb07b18666073c2129c4d37dc210301cb93e1b6dec4087324bed443eb7e6e-merged.mount: Deactivated successfully.
Oct 08 10:27:44 compute-0 podman[297846]: 2025-10-08 10:27:44.27970437 +0000 UTC m=+0.173017216 container remove d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:27:44 compute-0 systemd[1]: libpod-conmon-d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36.scope: Deactivated successfully.
Oct 08 10:27:44 compute-0 podman[297886]: 2025-10-08 10:27:44.448314491 +0000 UTC m=+0.037926072 container create 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 08 10:27:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:44 compute-0 systemd[1]: Started libpod-conmon-4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88.scope.
Oct 08 10:27:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:44 compute-0 podman[297886]: 2025-10-08 10:27:44.513440864 +0000 UTC m=+0.103052435 container init 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 08 10:27:44 compute-0 podman[297886]: 2025-10-08 10:27:44.520967938 +0000 UTC m=+0.110579509 container start 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:27:44 compute-0 podman[297886]: 2025-10-08 10:27:44.52472019 +0000 UTC m=+0.114331761 container attach 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:27:44 compute-0 podman[297886]: 2025-10-08 10:27:44.431681751 +0000 UTC m=+0.021293342 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:27:44 compute-0 ceph-mon[73572]: pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:44 compute-0 beautiful_ride[297904]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:27:44 compute-0 beautiful_ride[297904]: --> All data devices are unavailable
Oct 08 10:27:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:44.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:44 compute-0 systemd[1]: libpod-4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88.scope: Deactivated successfully.
Oct 08 10:27:44 compute-0 podman[297886]: 2025-10-08 10:27:44.824096675 +0000 UTC m=+0.413708246 container died 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c-merged.mount: Deactivated successfully.
Oct 08 10:27:44 compute-0 podman[297886]: 2025-10-08 10:27:44.864352301 +0000 UTC m=+0.453963872 container remove 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 08 10:27:44 compute-0 systemd[1]: libpod-conmon-4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88.scope: Deactivated successfully.
Oct 08 10:27:44 compute-0 sudo[297778]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:44 compute-0 sudo[297930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:27:44 compute-0 sudo[297930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:44 compute-0 sudo[297930]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:45 compute-0 sudo[297955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:27:45 compute-0 sudo[297955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:45 compute-0 sshd-session[297694]: Failed password for invalid user ubuntu from 196.203.106.113 port 49612 ssh2
Oct 08 10:27:45 compute-0 podman[298024]: 2025-10-08 10:27:45.442227312 +0000 UTC m=+0.032589778 container create a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 08 10:27:45 compute-0 systemd[1]: Started libpod-conmon-a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b.scope.
Oct 08 10:27:45 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:27:45 compute-0 podman[298024]: 2025-10-08 10:27:45.504179953 +0000 UTC m=+0.094542409 container init a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 08 10:27:45 compute-0 podman[298024]: 2025-10-08 10:27:45.512171182 +0000 UTC m=+0.102533608 container start a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:27:45 compute-0 trusting_noyce[298040]: 167 167
Oct 08 10:27:45 compute-0 systemd[1]: libpod-a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b.scope: Deactivated successfully.
Oct 08 10:27:45 compute-0 podman[298024]: 2025-10-08 10:27:45.515589963 +0000 UTC m=+0.105952409 container attach a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:27:45 compute-0 podman[298024]: 2025-10-08 10:27:45.515935735 +0000 UTC m=+0.106298181 container died a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 10:27:45 compute-0 podman[298024]: 2025-10-08 10:27:45.428672913 +0000 UTC m=+0.019035359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d4a5f9a44e8c7bf61839417a2e9ab7d9a117fe1ee38b279d980379c47d8470c-merged.mount: Deactivated successfully.
Oct 08 10:27:45 compute-0 podman[298024]: 2025-10-08 10:27:45.551635493 +0000 UTC m=+0.141997909 container remove a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:27:45 compute-0 systemd[1]: libpod-conmon-a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b.scope: Deactivated successfully.
Oct 08 10:27:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:45.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:45 compute-0 podman[298064]: 2025-10-08 10:27:45.704632468 +0000 UTC m=+0.036977471 container create 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 08 10:27:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:27:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:27:45 compute-0 ceph-mon[73572]: pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:45 compute-0 systemd[1]: Started libpod-conmon-08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f.scope.
Oct 08 10:27:45 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:45 compute-0 podman[298064]: 2025-10-08 10:27:45.767285691 +0000 UTC m=+0.099630714 container init 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:27:45 compute-0 podman[298064]: 2025-10-08 10:27:45.775960332 +0000 UTC m=+0.108305335 container start 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:27:45 compute-0 podman[298064]: 2025-10-08 10:27:45.778947029 +0000 UTC m=+0.111292032 container attach 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:27:45 compute-0 podman[298064]: 2025-10-08 10:27:45.689208657 +0000 UTC m=+0.021553680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:27:46 compute-0 distracted_moore[298081]: {
Oct 08 10:27:46 compute-0 distracted_moore[298081]:     "1": [
Oct 08 10:27:46 compute-0 distracted_moore[298081]:         {
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "devices": [
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "/dev/loop3"
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             ],
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "lv_name": "ceph_lv0",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "lv_size": "21470642176",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "name": "ceph_lv0",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "tags": {
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.cluster_name": "ceph",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.crush_device_class": "",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.encrypted": "0",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.osd_id": "1",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.type": "block",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.vdo": "0",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:                 "ceph.with_tpm": "0"
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             },
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "type": "block",
Oct 08 10:27:46 compute-0 distracted_moore[298081]:             "vg_name": "ceph_vg0"
Oct 08 10:27:46 compute-0 distracted_moore[298081]:         }
Oct 08 10:27:46 compute-0 distracted_moore[298081]:     ]
Oct 08 10:27:46 compute-0 distracted_moore[298081]: }
Oct 08 10:27:46 compute-0 systemd[1]: libpod-08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f.scope: Deactivated successfully.
Oct 08 10:27:46 compute-0 podman[298064]: 2025-10-08 10:27:46.09793497 +0000 UTC m=+0.430279983 container died 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54-merged.mount: Deactivated successfully.
Oct 08 10:27:46 compute-0 podman[298064]: 2025-10-08 10:27:46.136813031 +0000 UTC m=+0.469158034 container remove 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:27:46 compute-0 systemd[1]: libpod-conmon-08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f.scope: Deactivated successfully.
Oct 08 10:27:46 compute-0 sudo[297955]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:46 compute-0 sudo[298102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:27:46 compute-0 sudo[298102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:46 compute-0 sudo[298102]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:46 compute-0 sudo[298127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:27:46 compute-0 sudo[298127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:46 compute-0 nova_compute[262220]: 2025-10-08 10:27:46.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:46 compute-0 sshd-session[297694]: Connection closed by invalid user ubuntu 196.203.106.113 port 49612 [preauth]
Oct 08 10:27:46 compute-0 podman[298194]: 2025-10-08 10:27:46.7173488 +0000 UTC m=+0.035700819 container create 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:27:46 compute-0 systemd[1]: Started libpod-conmon-0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6.scope.
Oct 08 10:27:46 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:27:46 compute-0 podman[298194]: 2025-10-08 10:27:46.785964616 +0000 UTC m=+0.104316675 container init 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 08 10:27:46 compute-0 podman[298194]: 2025-10-08 10:27:46.792358204 +0000 UTC m=+0.110710233 container start 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 08 10:27:46 compute-0 podman[298194]: 2025-10-08 10:27:46.795158735 +0000 UTC m=+0.113510804 container attach 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 08 10:27:46 compute-0 determined_hopper[298211]: 167 167
Oct 08 10:27:46 compute-0 systemd[1]: libpod-0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6.scope: Deactivated successfully.
Oct 08 10:27:46 compute-0 podman[298194]: 2025-10-08 10:27:46.796826128 +0000 UTC m=+0.115178157 container died 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:27:46 compute-0 podman[298194]: 2025-10-08 10:27:46.702722805 +0000 UTC m=+0.021074844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-caccd238503ba567f7ef2be4697329c61ffef5c90670dea2f230b4d60bff473e-merged.mount: Deactivated successfully.
Oct 08 10:27:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:46.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:46 compute-0 podman[298194]: 2025-10-08 10:27:46.83076349 +0000 UTC m=+0.149115509 container remove 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:27:46 compute-0 systemd[1]: libpod-conmon-0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6.scope: Deactivated successfully.
Oct 08 10:27:46 compute-0 podman[298237]: 2025-10-08 10:27:46.988733776 +0000 UTC m=+0.047270335 container create d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Oct 08 10:27:47 compute-0 systemd[1]: Started libpod-conmon-d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669.scope.
Oct 08 10:27:47 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:27:47 compute-0 podman[298237]: 2025-10-08 10:27:46.968662555 +0000 UTC m=+0.027199144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:27:47 compute-0 podman[298237]: 2025-10-08 10:27:47.078019513 +0000 UTC m=+0.136556092 container init d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:27:47 compute-0 podman[298237]: 2025-10-08 10:27:47.08378087 +0000 UTC m=+0.142317429 container start d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 08 10:27:47 compute-0 podman[298237]: 2025-10-08 10:27:47.086922382 +0000 UTC m=+0.145458971 container attach d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:27:47 compute-0 sshd-session[298188]: Invalid user ubuntu from 196.203.106.113 port 47474
Oct 08 10:27:47 compute-0 sshd-session[298188]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:47 compute-0 sshd-session[298188]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:47.251Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:27:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:47.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:27:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:47.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:47 compute-0 lvm[298330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:27:47 compute-0 lvm[298330]: VG ceph_vg0 finished
Oct 08 10:27:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:27:47
Oct 08 10:27:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:27:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:27:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'volumes', 'vms', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.nfs', 'backups', 'default.rgw.control']
Oct 08 10:27:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:27:47 compute-0 admiring_cerf[298254]: {}
Oct 08 10:27:47 compute-0 systemd[1]: libpod-d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669.scope: Deactivated successfully.
Oct 08 10:27:47 compute-0 podman[298237]: 2025-10-08 10:27:47.797344325 +0000 UTC m=+0.855880884 container died d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 08 10:27:47 compute-0 systemd[1]: libpod-d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669.scope: Consumed 1.077s CPU time.
Oct 08 10:27:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5-merged.mount: Deactivated successfully.
Oct 08 10:27:47 compute-0 podman[298237]: 2025-10-08 10:27:47.835153302 +0000 UTC m=+0.893689861 container remove d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 08 10:27:47 compute-0 systemd[1]: libpod-conmon-d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669.scope: Deactivated successfully.
Oct 08 10:27:47 compute-0 sudo[298127]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:27:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:27:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:27:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:27:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:27:47 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:27:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:27:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:27:48 compute-0 sudo[298347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:27:48 compute-0 sudo[298347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:48 compute-0 sudo[298347]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:27:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:27:48 compute-0 ceph-mon[73572]: pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:27:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:27:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:27:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:48.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:48.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:27:48 compute-0 nova_compute[262220]: 2025-10-08 10:27:48.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:49 compute-0 sshd-session[298188]: Failed password for invalid user ubuntu from 196.203.106.113 port 47474 ssh2
Oct 08 10:27:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:49.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:49 compute-0 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct 08 10:27:50 compute-0 sshd-session[298188]: Connection closed by invalid user ubuntu 196.203.106.113 port 47474 [preauth]
Oct 08 10:27:50 compute-0 ceph-mon[73572]: pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:50 compute-0 sshd-session[298375]: Invalid user ubuntu from 196.203.106.113 port 47490
Oct 08 10:27:51 compute-0 sshd-session[298375]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:51 compute-0 sshd-session[298375]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:51 compute-0 sudo[298378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:27:51 compute-0 sudo[298378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:27:51 compute-0 sudo[298378]: pam_unix(sudo:session): session closed for user root
Oct 08 10:27:51 compute-0 nova_compute[262220]: 2025-10-08 10:27:51.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:27:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:51.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:27:52 compute-0 ceph-mon[73572]: pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:52.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:52 compute-0 nova_compute[262220]: 2025-10-08 10:27:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:27:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:53.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:53 compute-0 sshd-session[298375]: Failed password for invalid user ubuntu from 196.203.106.113 port 47490 ssh2
Oct 08 10:27:53 compute-0 ceph-mon[73572]: pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 08 10:27:53 compute-0 nova_compute[262220]: 2025-10-08 10:27:53.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:27:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:54 compute-0 nova_compute[262220]: 2025-10-08 10:27:54.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:54 compute-0 sshd-session[298375]: Connection closed by invalid user ubuntu 196.203.106.113 port 47490 [preauth]
Oct 08 10:27:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:54 compute-0 sshd-session[298406]: Invalid user ubuntu from 196.203.106.113 port 47504
Oct 08 10:27:54 compute-0 podman[298408]: 2025-10-08 10:27:54.775894744 +0000 UTC m=+0.070465108 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 08 10:27:54 compute-0 sshd-session[298406]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:54 compute-0 sshd-session[298406]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:27:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:54.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:27:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:55.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:55] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:27:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:55] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:27:55 compute-0 nova_compute[262220]: 2025-10-08 10:27:55.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:27:55 compute-0 nova_compute[262220]: 2025-10-08 10:27:55.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:27:55 compute-0 nova_compute[262220]: 2025-10-08 10:27:55.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:27:55 compute-0 nova_compute[262220]: 2025-10-08 10:27:55.900 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:27:56 compute-0 nova_compute[262220]: 2025-10-08 10:27:56.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:56 compute-0 sshd-session[298406]: Failed password for invalid user ubuntu from 196.203.106.113 port 47504 ssh2
Oct 08 10:27:56 compute-0 ceph-mon[73572]: pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:56.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:56 compute-0 nova_compute[262220]: 2025-10-08 10:27:56.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:27:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:57.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:27:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:27:57.427 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:27:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:27:57.427 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:27:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:27:57.427 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:27:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:57.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:57 compute-0 sshd-session[298406]: Connection closed by invalid user ubuntu 196.203.106.113 port 47504 [preauth]
Oct 08 10:27:57 compute-0 nova_compute[262220]: 2025-10-08 10:27:57.881 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:27:58 compute-0 sshd-session[298433]: Invalid user ubuntu from 196.203.106.113 port 36054
Oct 08 10:27:58 compute-0 sshd-session[298433]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:27:58 compute-0 sshd-session[298433]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:27:58 compute-0 ceph-mon[73572]: pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:27:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:58.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:58 compute-0 nova_compute[262220]: 2025-10-08 10:27:58.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:27:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:58.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:27:58 compute-0 nova_compute[262220]: 2025-10-08 10:27:58.910 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:27:58 compute-0 nova_compute[262220]: 2025-10-08 10:27:58.911 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:27:58 compute-0 nova_compute[262220]: 2025-10-08 10:27:58.911 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:27:58 compute-0 nova_compute[262220]: 2025-10-08 10:27:58.911 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:27:58 compute-0 nova_compute[262220]: 2025-10-08 10:27:58.912 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:27:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:27:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:27:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:27:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:27:59 compute-0 nova_compute[262220]: 2025-10-08 10:27:59.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:27:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:27:59 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3787316052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:27:59 compute-0 nova_compute[262220]: 2025-10-08 10:27:59.417 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:27:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:27:59 compute-0 nova_compute[262220]: 2025-10-08 10:27:59.557 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:27:59 compute-0 nova_compute[262220]: 2025-10-08 10:27:59.558 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4437MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:27:59 compute-0 nova_compute[262220]: 2025-10-08 10:27:59.558 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:27:59 compute-0 nova_compute[262220]: 2025-10-08 10:27:59.558 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:27:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:27:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:27:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:27:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:59.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:27:59 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3787316052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:27:59 compute-0 nova_compute[262220]: 2025-10-08 10:27:59.810 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:27:59 compute-0 nova_compute[262220]: 2025-10-08 10:27:59.810 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:27:59 compute-0 nova_compute[262220]: 2025-10-08 10:27:59.831 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:28:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:28:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925374754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:28:00 compute-0 nova_compute[262220]: 2025-10-08 10:28:00.256 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:28:00 compute-0 nova_compute[262220]: 2025-10-08 10:28:00.261 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:28:00 compute-0 nova_compute[262220]: 2025-10-08 10:28:00.290 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:28:00 compute-0 nova_compute[262220]: 2025-10-08 10:28:00.292 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:28:00 compute-0 nova_compute[262220]: 2025-10-08 10:28:00.292 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:28:00 compute-0 ceph-mon[73572]: pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3634036169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:28:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/925374754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:28:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:00.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:00 compute-0 sshd-session[298433]: Failed password for invalid user ubuntu from 196.203.106.113 port 36054 ssh2
Oct 08 10:28:01 compute-0 nova_compute[262220]: 2025-10-08 10:28:01.293 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:28:01 compute-0 nova_compute[262220]: 2025-10-08 10:28:01.293 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:28:01 compute-0 nova_compute[262220]: 2025-10-08 10:28:01.294 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:28:01 compute-0 nova_compute[262220]: 2025-10-08 10:28:01.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:28:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:01.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:28:01 compute-0 sshd-session[298433]: Connection closed by invalid user ubuntu 196.203.106.113 port 36054 [preauth]
Oct 08 10:28:01 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1373630938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:28:02 compute-0 sshd-session[298483]: Invalid user ubuntu from 196.203.106.113 port 36062
Oct 08 10:28:02 compute-0 podman[298486]: 2025-10-08 10:28:02.317833035 +0000 UTC m=+0.070868080 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 08 10:28:02 compute-0 sshd-session[298483]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:02 compute-0 sshd-session[298483]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:02 compute-0 ceph-mon[73572]: pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:02.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:28:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:28:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:03.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:28:03 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2306011842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:28:03 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/549232925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:28:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:04 compute-0 nova_compute[262220]: 2025-10-08 10:28:04.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:04 compute-0 sshd-session[298483]: Failed password for invalid user ubuntu from 196.203.106.113 port 36062 ssh2
Oct 08 10:28:04 compute-0 ceph-mon[73572]: pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:28:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:28:05 compute-0 sshd-session[298483]: Connection closed by invalid user ubuntu 196.203.106.113 port 36062 [preauth]
Oct 08 10:28:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:05.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:28:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:28:05 compute-0 ceph-mon[73572]: pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:05 compute-0 nova_compute[262220]: 2025-10-08 10:28:05.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:28:05 compute-0 sshd-session[298516]: Invalid user ubuntu from 196.203.106.113 port 42476
Oct 08 10:28:06 compute-0 sshd-session[298516]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:06 compute-0 sshd-session[298516]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:06 compute-0 podman[298519]: 2025-10-08 10:28:06.202897864 +0000 UTC m=+0.071934876 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:28:06 compute-0 podman[298520]: 2025-10-08 10:28:06.219222263 +0000 UTC m=+0.083977946 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:28:06 compute-0 nova_compute[262220]: 2025-10-08 10:28:06.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:06.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:07.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:07.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:07 compute-0 sshd-session[298516]: Failed password for invalid user ubuntu from 196.203.106.113 port 42476 ssh2
Oct 08 10:28:08 compute-0 ceph-mon[73572]: pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:08.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:28:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:08.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:28:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:09 compute-0 nova_compute[262220]: 2025-10-08 10:28:09.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:09 compute-0 sshd-session[298516]: Connection closed by invalid user ubuntu 196.203.106.113 port 42476 [preauth]
Oct 08 10:28:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=cleanup t=2025-10-08T10:28:09.471085884Z level=info msg="Completed cleanup jobs" duration=35.47843ms
Oct 08 10:28:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T10:28:09.569023472Z level=info msg="Update check succeeded" duration=56.791982ms
Oct 08 10:28:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T10:28:09.569073124Z level=info msg="Update check succeeded" duration=56.236505ms
Oct 08 10:28:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:09 compute-0 sshd-session[298560]: Invalid user ubuntu from 196.203.106.113 port 42478
Oct 08 10:28:09 compute-0 sshd-session[298560]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:09 compute-0 sshd-session[298560]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:10 compute-0 ceph-mon[73572]: pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:10.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:11 compute-0 nova_compute[262220]: 2025-10-08 10:28:11.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:11 compute-0 sudo[298564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:28:11 compute-0 sudo[298564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:28:11 compute-0 sudo[298564]: pam_unix(sudo:session): session closed for user root
Oct 08 10:28:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:11.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:12 compute-0 sshd-session[298560]: Failed password for invalid user ubuntu from 196.203.106.113 port 42478 ssh2
Oct 08 10:28:12 compute-0 ceph-mon[73572]: pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:12.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:12 compute-0 sshd-session[298560]: Connection closed by invalid user ubuntu 196.203.106.113 port 42478 [preauth]
Oct 08 10:28:13 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:42482 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:13 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:42494 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:13.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:13 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:42498 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:13 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:42504 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:14 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:42508 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:14 compute-0 nova_compute[262220]: 2025-10-08 10:28:14.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:14 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:42512 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:14 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:42516 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:14 compute-0 ceph-mon[73572]: pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:14 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40466 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:28:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:14.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:28:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40474 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40490 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40494 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40510 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:28:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:28:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40518 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:16 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40522 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:16 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40532 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:16 compute-0 nova_compute[262220]: 2025-10-08 10:28:16.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:16 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40540 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:16 compute-0 ceph-mon[73572]: pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:16.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:16 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40556 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:17 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40558 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:17.255Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:28:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:17.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:17 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40562 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:17.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:17 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40578 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:17 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40588 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:28:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:28:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:28:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:28:18 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40594 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:28:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:28:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:28:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:28:18 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40598 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:18 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40612 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:18 compute-0 ceph-mon[73572]: pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:28:18 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40618 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:18.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:18.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40622 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:19 compute-0 nova_compute[262220]: 2025-10-08 10:28:19.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40628 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40644 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:28:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:19.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:28:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40660 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:19 compute-0 ceph-mon[73572]: pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.808537) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299808573, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1370, "num_deletes": 251, "total_data_size": 2498152, "memory_usage": 2528808, "flush_reason": "Manual Compaction"}
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299822217, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 2442536, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35622, "largest_seqno": 36991, "table_properties": {"data_size": 2436177, "index_size": 3558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13584, "raw_average_key_size": 20, "raw_value_size": 2423354, "raw_average_value_size": 3574, "num_data_blocks": 156, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759919169, "oldest_key_time": 1759919169, "file_creation_time": 1759919299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 13755 microseconds, and 4918 cpu microseconds.
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.822298) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 2442536 bytes OK
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.822319) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.824852) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.824866) EVENT_LOG_v1 {"time_micros": 1759919299824862, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.824884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2492247, prev total WAL file size 2492247, number of live WAL files 2.
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.825499) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2385KB)], [77(11MB)]
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299825552, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14624212, "oldest_snapshot_seqno": -1}
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6684 keys, 12528018 bytes, temperature: kUnknown
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299915945, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12528018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12486230, "index_size": 23948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 175678, "raw_average_key_size": 26, "raw_value_size": 12368654, "raw_average_value_size": 1850, "num_data_blocks": 937, "num_entries": 6684, "num_filter_entries": 6684, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.917305) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12528018 bytes
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.918554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.7 rd, 138.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.6 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(11.1) write-amplify(5.1) OK, records in: 7200, records dropped: 516 output_compression: NoCompression
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.918580) EVENT_LOG_v1 {"time_micros": 1759919299918571, "job": 44, "event": "compaction_finished", "compaction_time_micros": 90451, "compaction_time_cpu_micros": 35383, "output_level": 6, "num_output_files": 1, "total_output_size": 12528018, "num_input_records": 7200, "num_output_records": 6684, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299919055, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299920795, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.825409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:28:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:28:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40662 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:20 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40664 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:20 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40672 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:20 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40688 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1972566767' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:28:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1972566767' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:28:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:28:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:20.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:28:20 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40690 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:21 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40698 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:21 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40700 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:21 compute-0 nova_compute[262220]: 2025-10-08 10:28:21.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:21 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40704 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:21 compute-0 ceph-mon[73572]: pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:21 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40708 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:22 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40714 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:22 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40720 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:22 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40722 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:22 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40736 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:22.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40740 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40742 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40750 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:23.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40762 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:23 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40774 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:24 compute-0 nova_compute[262220]: 2025-10-08 10:28:24.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:24 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40776 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:24 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:40778 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:24 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:32954 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:24 compute-0 ceph-mon[73572]: pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:24.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:24 compute-0 podman[298602]: 2025-10-08 10:28:24.893547086 +0000 UTC m=+0.052689561 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 08 10:28:24 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:32962 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:25 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:32972 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:25 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:32982 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:25 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:32988 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:28:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 08 10:28:25 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33004 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:26 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33010 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:26 compute-0 ceph-mon[73572]: pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:26 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33014 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:26 compute-0 nova_compute[262220]: 2025-10-08 10:28:26.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:26 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33022 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:26 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33028 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:26.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:27 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33040 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:27.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:27 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33052 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:27 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33064 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:27 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33068 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:28 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33082 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:28 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33088 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:28 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33104 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:28 compute-0 ceph-mon[73572]: pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:28 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33114 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:28.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:28.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:28 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33130 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:29 compute-0 nova_compute[262220]: 2025-10-08 10:28:29.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:29 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:33132 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:28:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:29 compute-0 sshd-session[298627]: Invalid user debian from 196.203.106.113 port 33148
Oct 08 10:28:30 compute-0 sshd-session[298627]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:30 compute-0 sshd-session[298627]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:30 compute-0 ceph-mon[73572]: pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:31 compute-0 nova_compute[262220]: 2025-10-08 10:28:31.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:31 compute-0 sudo[298631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:28:31 compute-0 sudo[298631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:28:31 compute-0 sudo[298631]: pam_unix(sudo:session): session closed for user root
Oct 08 10:28:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:28:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:28:32 compute-0 sshd-session[298627]: Failed password for invalid user debian from 196.203.106.113 port 33148 ssh2
Oct 08 10:28:32 compute-0 ceph-mon[73572]: pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:28:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:28:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:32.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:32 compute-0 podman[298657]: 2025-10-08 10:28:32.924738073 +0000 UTC m=+0.084816034 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 08 10:28:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:33.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:28:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:34 compute-0 nova_compute[262220]: 2025-10-08 10:28:34.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:34 compute-0 sshd-session[298627]: Connection closed by invalid user debian 196.203.106.113 port 33148 [preauth]
Oct 08 10:28:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:34 compute-0 sshd-session[298685]: Invalid user debian from 196.203.106.113 port 33162
Oct 08 10:28:34 compute-0 ceph-mon[73572]: pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:28:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:34.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:28:34 compute-0 sshd-session[298685]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:34 compute-0 sshd-session[298685]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:35] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:28:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:35] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:28:35 compute-0 ceph-mon[73572]: pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:36 compute-0 nova_compute[262220]: 2025-10-08 10:28:36.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:36 compute-0 podman[298689]: 2025-10-08 10:28:36.894797188 +0000 UTC m=+0.054349035 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 08 10:28:36 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:36 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:36 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:36 compute-0 podman[298690]: 2025-10-08 10:28:36.913836676 +0000 UTC m=+0.059447110 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 08 10:28:37 compute-0 sshd-session[298685]: Failed password for invalid user debian from 196.203.106.113 port 33162 ssh2
Oct 08 10:28:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:37.258Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:28:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:37.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:28:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:28:38 compute-0 ceph-mon[73572]: pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:38.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:38 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:38 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:38 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:39 compute-0 nova_compute[262220]: 2025-10-08 10:28:39.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:39 compute-0 sshd-session[298685]: Connection closed by invalid user debian 196.203.106.113 port 33162 [preauth]
Oct 08 10:28:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:39 compute-0 sshd-session[298728]: Invalid user debian from 196.203.106.113 port 33412
Oct 08 10:28:39 compute-0 sshd-session[298728]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:39 compute-0 sshd-session[298728]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:39 compute-0 ceph-mon[73572]: pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:40 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:40 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:40 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:40.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:41 compute-0 nova_compute[262220]: 2025-10-08 10:28:41.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:28:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:41.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:28:42 compute-0 sshd-session[298728]: Failed password for invalid user debian from 196.203.106.113 port 33412 ssh2
Oct 08 10:28:42 compute-0 ceph-mon[73572]: pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:42 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:42 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:28:42 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:42.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:28:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:43.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:44 compute-0 nova_compute[262220]: 2025-10-08 10:28:44.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:44 compute-0 sshd-session[298728]: Connection closed by invalid user debian 196.203.106.113 port 33412 [preauth]
Oct 08 10:28:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:44 compute-0 ceph-mon[73572]: pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:44 compute-0 sshd-session[298735]: Invalid user debian from 196.203.106.113 port 33416
Oct 08 10:28:44 compute-0 sshd-session[298735]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:44 compute-0 sshd-session[298735]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:44 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:44 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:28:44 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:44.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:28:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:45.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:45] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:28:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:45] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 08 10:28:46 compute-0 nova_compute[262220]: 2025-10-08 10:28:46.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:46 compute-0 ceph-mon[73572]: pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:46 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:46 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:28:46 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:46.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:28:47 compute-0 sshd-session[298735]: Failed password for invalid user debian from 196.203.106.113 port 33416 ssh2
Oct 08 10:28:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:47.260Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:28:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:47.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:47.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:28:47
Oct 08 10:28:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:28:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:28:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Oct 08 10:28:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:28:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:28:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:28:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:28:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:28:48 compute-0 ceph-mon[73572]: pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:28:48 compute-0 sudo[298741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:28:48 compute-0 sudo[298741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:28:48 compute-0 sudo[298741]: pam_unix(sudo:session): session closed for user root
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:28:48 compute-0 sudo[298766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:28:48 compute-0 sudo[298766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:28:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:28:48 compute-0 sudo[298766]: pam_unix(sudo:session): session closed for user root
Oct 08 10:28:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:48.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 08 10:28:48 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 10:28:48 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:48 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:28:48 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:48.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:28:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:49 compute-0 nova_compute[262220]: 2025-10-08 10:28:49.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:49 compute-0 sshd-session[298735]: Connection closed by invalid user debian 196.203.106.113 port 33416 [preauth]
Oct 08 10:28:49 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 08 10:28:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:49.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:49 compute-0 sshd-session[298823]: Invalid user debian from 196.203.106.113 port 34448
Oct 08 10:28:49 compute-0 sshd-session[298823]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:49 compute-0 sshd-session[298823]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:50 compute-0 ceph-mon[73572]: pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:28:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 08 10:28:50 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 08 10:28:50 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:50 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:50 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:50.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:50 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:51 compute-0 nova_compute[262220]: 2025-10-08 10:28:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:51.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 08 10:28:51 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 10:28:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 08 10:28:51 compute-0 sudo[298827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:28:51 compute-0 sudo[298827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:28:51 compute-0 sudo[298827]: pam_unix(sudo:session): session closed for user root
Oct 08 10:28:52 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:52 compute-0 sshd-session[298823]: Failed password for invalid user debian from 196.203.106.113 port 34448 ssh2
Oct 08 10:28:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 08 10:28:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:52 compute-0 ceph-mon[73572]: pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:52 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 08 10:28:52 compute-0 nova_compute[262220]: 2025-10-08 10:28:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:28:52 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:52 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:28:52 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:52.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:28:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:53.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:53 compute-0 nova_compute[262220]: 2025-10-08 10:28:53.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:28:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:54 compute-0 nova_compute[262220]: 2025-10-08 10:28:54.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:54 compute-0 sshd-session[298823]: Connection closed by invalid user debian 196.203.106.113 port 34448 [preauth]
Oct 08 10:28:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:54 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:54 compute-0 sshd-session[298855]: Invalid user debian from 196.203.106.113 port 34460
Oct 08 10:28:54 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:54 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:54 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:54.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:54 compute-0 sshd-session[298855]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:54 compute-0 sshd-session[298855]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 08 10:28:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 10:28:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:28:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:28:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:28:54 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:28:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:28:54 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:28:55 compute-0 podman[298857]: 2025-10-08 10:28:55.111253132 +0000 UTC m=+0.108595806 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 08 10:28:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:28:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:55.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:55] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:28:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:55] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:28:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:28:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:28:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:28:55 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:28:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:28:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:28:55 compute-0 nova_compute[262220]: 2025-10-08 10:28:55.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:28:55 compute-0 nova_compute[262220]: 2025-10-08 10:28:55.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:28:55 compute-0 nova_compute[262220]: 2025-10-08 10:28:55.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:28:55 compute-0 nova_compute[262220]: 2025-10-08 10:28:55.903 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:28:55 compute-0 sudo[298879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:28:55 compute-0 sudo[298879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:28:55 compute-0 sudo[298879]: pam_unix(sudo:session): session closed for user root
Oct 08 10:28:56 compute-0 sudo[298904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:28:56 compute-0 sudo[298904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:28:56 compute-0 ceph-mon[73572]: pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:28:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 08 10:28:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:28:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:28:56 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:56 compute-0 nova_compute[262220]: 2025-10-08 10:28:56.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:56 compute-0 podman[298972]: 2025-10-08 10:28:56.516224732 +0000 UTC m=+0.038258642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:28:56 compute-0 podman[298972]: 2025-10-08 10:28:56.850721996 +0000 UTC m=+0.372755846 container create ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:28:56 compute-0 nova_compute[262220]: 2025-10-08 10:28:56.898 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:28:56 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:56 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:56 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:56.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:56 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:28:57 compute-0 systemd[1]: Started libpod-conmon-ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc.scope.
Oct 08 10:28:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:28:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:57.261Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:28:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:57.261Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:28:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:57.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:28:57 compute-0 sshd-session[298855]: Failed password for invalid user debian from 196.203.106.113 port 34460 ssh2
Oct 08 10:28:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:28:57.428 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:28:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:28:57.430 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:28:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:28:57.430 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:28:57 compute-0 podman[298972]: 2025-10-08 10:28:57.434912494 +0000 UTC m=+0.956946324 container init ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:28:57 compute-0 podman[298972]: 2025-10-08 10:28:57.443078629 +0000 UTC m=+0.965112479 container start ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:28:57 compute-0 zealous_sammet[298988]: 167 167
Oct 08 10:28:57 compute-0 systemd[1]: libpod-ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc.scope: Deactivated successfully.
Oct 08 10:28:57 compute-0 ceph-mon[73572]: pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:28:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:28:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:28:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:28:57 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:28:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:57.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:57 compute-0 podman[298972]: 2025-10-08 10:28:57.659784219 +0000 UTC m=+1.181818139 container attach ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 08 10:28:57 compute-0 podman[298972]: 2025-10-08 10:28:57.660522724 +0000 UTC m=+1.182556574 container died ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 08 10:28:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2168519ec43c6cba2f9cee37d7d6468f05af415f19e02204e732202c7cc9de6d-merged.mount: Deactivated successfully.
Oct 08 10:28:58 compute-0 podman[298972]: 2025-10-08 10:28:58.802692157 +0000 UTC m=+2.324725967 container remove ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:28:58 compute-0 ceph-mon[73572]: pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:28:58 compute-0 systemd[1]: libpod-conmon-ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc.scope: Deactivated successfully.
Oct 08 10:28:58 compute-0 nova_compute[262220]: 2025-10-08 10:28:58.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:28:58 compute-0 nova_compute[262220]: 2025-10-08 10:28:58.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:28:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:58.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:28:58 compute-0 nova_compute[262220]: 2025-10-08 10:28:58.913 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:28:58 compute-0 nova_compute[262220]: 2025-10-08 10:28:58.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:28:58 compute-0 nova_compute[262220]: 2025-10-08 10:28:58.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:28:58 compute-0 nova_compute[262220]: 2025-10-08 10:28:58.914 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:28:58 compute-0 nova_compute[262220]: 2025-10-08 10:28:58.914 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:28:58 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:58 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:28:58 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:58.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:28:58 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:28:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:28:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:28:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:28:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:28:59 compute-0 podman[299014]: 2025-10-08 10:28:58.941319706 +0000 UTC m=+0.024772656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:28:59 compute-0 podman[299014]: 2025-10-08 10:28:59.161668736 +0000 UTC m=+0.245121636 container create 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 08 10:28:59 compute-0 nova_compute[262220]: 2025-10-08 10:28:59.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:28:59 compute-0 sshd-session[298855]: Connection closed by invalid user debian 196.203.106.113 port 34460 [preauth]
Oct 08 10:28:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:28:59 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3200628335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:28:59 compute-0 systemd[1]: Started libpod-conmon-97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7.scope.
Oct 08 10:28:59 compute-0 nova_compute[262220]: 2025-10-08 10:28:59.410 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:28:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:28:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:28:59 compute-0 podman[299014]: 2025-10-08 10:28:59.578600655 +0000 UTC m=+0.662053575 container init 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:28:59 compute-0 podman[299014]: 2025-10-08 10:28:59.585724846 +0000 UTC m=+0.669177736 container start 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 08 10:28:59 compute-0 nova_compute[262220]: 2025-10-08 10:28:59.628 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:28:59 compute-0 nova_compute[262220]: 2025-10-08 10:28:59.631 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4481MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:28:59 compute-0 nova_compute[262220]: 2025-10-08 10:28:59.632 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:28:59 compute-0 nova_compute[262220]: 2025-10-08 10:28:59.632 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:28:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:28:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:28:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:59.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:28:59 compute-0 nova_compute[262220]: 2025-10-08 10:28:59.697 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:28:59 compute-0 nova_compute[262220]: 2025-10-08 10:28:59.697 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:28:59 compute-0 podman[299014]: 2025-10-08 10:28:59.714863406 +0000 UTC m=+0.798316326 container attach 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 08 10:28:59 compute-0 nova_compute[262220]: 2025-10-08 10:28:59.716 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:28:59 compute-0 sshd-session[299049]: Invalid user debian from 196.203.106.113 port 42180
Oct 08 10:28:59 compute-0 eager_mccarthy[299055]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:28:59 compute-0 eager_mccarthy[299055]: --> All data devices are unavailable
Oct 08 10:28:59 compute-0 sshd-session[299049]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:28:59 compute-0 sshd-session[299049]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:28:59 compute-0 systemd[1]: libpod-97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7.scope: Deactivated successfully.
Oct 08 10:28:59 compute-0 podman[299014]: 2025-10-08 10:28:59.95409527 +0000 UTC m=+1.037548190 container died 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 08 10:29:00 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:29:00 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812258560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:29:00 compute-0 nova_compute[262220]: 2025-10-08 10:29:00.258 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:29:00 compute-0 nova_compute[262220]: 2025-10-08 10:29:00.270 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:29:00 compute-0 nova_compute[262220]: 2025-10-08 10:29:00.301 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:29:00 compute-0 nova_compute[262220]: 2025-10-08 10:29:00.306 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:29:00 compute-0 nova_compute[262220]: 2025-10-08 10:29:00.307 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0-merged.mount: Deactivated successfully.
Oct 08 10:29:00 compute-0 ceph-mon[73572]: pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:29:00 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3200628335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:29:00 compute-0 podman[299014]: 2025-10-08 10:29:00.798266733 +0000 UTC m=+1.881719623 container remove 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:29:00 compute-0 systemd[1]: libpod-conmon-97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7.scope: Deactivated successfully.
Oct 08 10:29:00 compute-0 sudo[298904]: pam_unix(sudo:session): session closed for user root
Oct 08 10:29:00 compute-0 sudo[299105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:29:00 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:00 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:00 compute-0 sudo[299105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:29:00 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:00.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:00 compute-0 sudo[299105]: pam_unix(sudo:session): session closed for user root
Oct 08 10:29:00 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:29:01 compute-0 sudo[299130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:29:01 compute-0 sudo[299130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:29:01 compute-0 nova_compute[262220]: 2025-10-08 10:29:01.306 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:29:01 compute-0 nova_compute[262220]: 2025-10-08 10:29:01.308 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:29:01 compute-0 nova_compute[262220]: 2025-10-08 10:29:01.308 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:29:01 compute-0 nova_compute[262220]: 2025-10-08 10:29:01.309 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:29:01 compute-0 podman[299198]: 2025-10-08 10:29:01.420311048 +0000 UTC m=+0.023249426 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:29:01 compute-0 podman[299198]: 2025-10-08 10:29:01.529275353 +0000 UTC m=+0.132213711 container create 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 08 10:29:01 compute-0 nova_compute[262220]: 2025-10-08 10:29:01.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:01.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:01 compute-0 systemd[1]: Started libpod-conmon-4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8.scope.
Oct 08 10:29:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:29:01 compute-0 podman[299198]: 2025-10-08 10:29:01.866572339 +0000 UTC m=+0.469510717 container init 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 08 10:29:01 compute-0 podman[299198]: 2025-10-08 10:29:01.875956702 +0000 UTC m=+0.478895090 container start 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:29:01 compute-0 competent_carson[299214]: 167 167
Oct 08 10:29:01 compute-0 systemd[1]: libpod-4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8.scope: Deactivated successfully.
Oct 08 10:29:01 compute-0 podman[299198]: 2025-10-08 10:29:01.899613091 +0000 UTC m=+0.502551489 container attach 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 10:29:01 compute-0 podman[299198]: 2025-10-08 10:29:01.900225461 +0000 UTC m=+0.503163839 container died 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 08 10:29:01 compute-0 sshd-session[299049]: Failed password for invalid user debian from 196.203.106.113 port 42180 ssh2
Oct 08 10:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d68361f17fc5ccc827a3203d3374739968927eb032e090682d3c606211123eb9-merged.mount: Deactivated successfully.
Oct 08 10:29:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/812258560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:29:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1085750379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:29:02 compute-0 sshd-session[299049]: Connection closed by invalid user debian 196.203.106.113 port 42180 [preauth]
Oct 08 10:29:02 compute-0 podman[299198]: 2025-10-08 10:29:02.16337257 +0000 UTC m=+0.766310928 container remove 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:29:02 compute-0 systemd[1]: libpod-conmon-4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8.scope: Deactivated successfully.
Oct 08 10:29:02 compute-0 podman[299243]: 2025-10-08 10:29:02.43922463 +0000 UTC m=+0.103357404 container create 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 08 10:29:02 compute-0 podman[299243]: 2025-10-08 10:29:02.378975296 +0000 UTC m=+0.043108090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:29:02 compute-0 systemd[1]: Started libpod-conmon-8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135.scope.
Oct 08 10:29:02 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:29:02 compute-0 podman[299243]: 2025-10-08 10:29:02.58498548 +0000 UTC m=+0.249118284 container init 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 08 10:29:02 compute-0 podman[299243]: 2025-10-08 10:29:02.59269587 +0000 UTC m=+0.256828634 container start 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:29:02 compute-0 podman[299243]: 2025-10-08 10:29:02.618811178 +0000 UTC m=+0.282943942 container attach 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 08 10:29:02 compute-0 sshd-session[299236]: Invalid user debian from 196.203.106.113 port 42196
Oct 08 10:29:02 compute-0 sshd-session[299236]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:02 compute-0 sshd-session[299236]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:02 compute-0 busy_borg[299259]: {
Oct 08 10:29:02 compute-0 busy_borg[299259]:     "1": [
Oct 08 10:29:02 compute-0 busy_borg[299259]:         {
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "devices": [
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "/dev/loop3"
Oct 08 10:29:02 compute-0 busy_borg[299259]:             ],
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "lv_name": "ceph_lv0",
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "lv_size": "21470642176",
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "name": "ceph_lv0",
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "tags": {
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.cluster_name": "ceph",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.crush_device_class": "",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.encrypted": "0",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.osd_id": "1",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.type": "block",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.vdo": "0",
Oct 08 10:29:02 compute-0 busy_borg[299259]:                 "ceph.with_tpm": "0"
Oct 08 10:29:02 compute-0 busy_borg[299259]:             },
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "type": "block",
Oct 08 10:29:02 compute-0 busy_borg[299259]:             "vg_name": "ceph_vg0"
Oct 08 10:29:02 compute-0 busy_borg[299259]:         }
Oct 08 10:29:02 compute-0 busy_borg[299259]:     ]
Oct 08 10:29:02 compute-0 busy_borg[299259]: }
Oct 08 10:29:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:29:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:29:02 compute-0 systemd[1]: libpod-8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135.scope: Deactivated successfully.
Oct 08 10:29:02 compute-0 podman[299243]: 2025-10-08 10:29:02.928679333 +0000 UTC m=+0.592812087 container died 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:29:02 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:02 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:02 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:02.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:02 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106-merged.mount: Deactivated successfully.
Oct 08 10:29:03 compute-0 podman[299243]: 2025-10-08 10:29:03.138583894 +0000 UTC m=+0.802716658 container remove 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:29:03 compute-0 systemd[1]: libpod-conmon-8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135.scope: Deactivated successfully.
Oct 08 10:29:03 compute-0 sudo[299130]: pam_unix(sudo:session): session closed for user root
Oct 08 10:29:03 compute-0 sudo[299299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:29:03 compute-0 podman[299279]: 2025-10-08 10:29:03.254153714 +0000 UTC m=+0.220447164 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 08 10:29:03 compute-0 sudo[299299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:29:03 compute-0 sudo[299299]: pam_unix(sudo:session): session closed for user root
Oct 08 10:29:03 compute-0 sudo[299332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:29:03 compute-0 sudo[299332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:29:03 compute-0 ceph-mon[73572]: pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:29:03 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/393265661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:29:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:29:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:03.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:03 compute-0 podman[299397]: 2025-10-08 10:29:03.799243102 +0000 UTC m=+0.081555447 container create c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 08 10:29:03 compute-0 podman[299397]: 2025-10-08 10:29:03.744517026 +0000 UTC m=+0.026829401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:29:03 compute-0 systemd[1]: Started libpod-conmon-c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6.scope.
Oct 08 10:29:03 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:29:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:04 compute-0 podman[299397]: 2025-10-08 10:29:04.004066779 +0000 UTC m=+0.286379144 container init c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 08 10:29:04 compute-0 podman[299397]: 2025-10-08 10:29:04.011741467 +0000 UTC m=+0.294053832 container start c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 08 10:29:04 compute-0 zen_leavitt[299413]: 167 167
Oct 08 10:29:04 compute-0 systemd[1]: libpod-c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6.scope: Deactivated successfully.
Oct 08 10:29:04 compute-0 conmon[299413]: conmon c8115a747135c6e811c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6.scope/container/memory.events
Oct 08 10:29:04 compute-0 podman[299397]: 2025-10-08 10:29:04.055356433 +0000 UTC m=+0.337668828 container attach c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 08 10:29:04 compute-0 podman[299397]: 2025-10-08 10:29:04.056987966 +0000 UTC m=+0.339300341 container died c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:29:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a2fdb29e1cfc2ef52b78fc031ac9e99f9c0cda1766335aafd13f2f5c49127e1-merged.mount: Deactivated successfully.
Oct 08 10:29:04 compute-0 nova_compute[262220]: 2025-10-08 10:29:04.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:04 compute-0 podman[299397]: 2025-10-08 10:29:04.307545627 +0000 UTC m=+0.589858012 container remove c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 10:29:04 compute-0 systemd[1]: libpod-conmon-c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6.scope: Deactivated successfully.
Oct 08 10:29:04 compute-0 ceph-mon[73572]: pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 08 10:29:04 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3158630014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:29:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:04 compute-0 podman[299438]: 2025-10-08 10:29:04.620672807 +0000 UTC m=+0.125400380 container create d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Oct 08 10:29:04 compute-0 podman[299438]: 2025-10-08 10:29:04.538464229 +0000 UTC m=+0.043191812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:29:04 compute-0 systemd[1]: Started libpod-conmon-d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef.scope.
Oct 08 10:29:04 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:29:04 compute-0 podman[299438]: 2025-10-08 10:29:04.864508469 +0000 UTC m=+0.369236112 container init d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:29:04 compute-0 podman[299438]: 2025-10-08 10:29:04.872636753 +0000 UTC m=+0.377364336 container start d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:29:04 compute-0 podman[299438]: 2025-10-08 10:29:04.926479291 +0000 UTC m=+0.431206964 container attach d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:29:04 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:04 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:04 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:04.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:04 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:29:05 compute-0 sshd-session[299236]: Failed password for invalid user debian from 196.203.106.113 port 42196 ssh2
Oct 08 10:29:05 compute-0 lvm[299529]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:29:05 compute-0 lvm[299529]: VG ceph_vg0 finished
Oct 08 10:29:05 compute-0 brave_aryabhata[299454]: {}
Oct 08 10:29:05 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2277510334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:29:05 compute-0 podman[299438]: 2025-10-08 10:29:05.64173745 +0000 UTC m=+1.146465023 container died d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 08 10:29:05 compute-0 systemd[1]: libpod-d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef.scope: Deactivated successfully.
Oct 08 10:29:05 compute-0 systemd[1]: libpod-d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef.scope: Consumed 1.260s CPU time.
Oct 08 10:29:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:05] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:29:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:05] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:29:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd-merged.mount: Deactivated successfully.
Oct 08 10:29:06 compute-0 podman[299438]: 2025-10-08 10:29:06.046771984 +0000 UTC m=+1.551499567 container remove d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 08 10:29:06 compute-0 sudo[299332]: pam_unix(sudo:session): session closed for user root
Oct 08 10:29:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:29:06 compute-0 systemd[1]: libpod-conmon-d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef.scope: Deactivated successfully.
Oct 08 10:29:06 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:29:06 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:29:06 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:29:06 compute-0 sudo[299549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:29:06 compute-0 sudo[299549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:29:06 compute-0 sudo[299549]: pam_unix(sudo:session): session closed for user root
Oct 08 10:29:06 compute-0 nova_compute[262220]: 2025-10-08 10:29:06.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:06 compute-0 ceph-mon[73572]: pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 08 10:29:06 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:29:06 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:29:06 compute-0 nova_compute[262220]: 2025-10-08 10:29:06.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:29:06 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:06 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:06 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:06.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:06 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:07 compute-0 sshd-session[299236]: Connection closed by invalid user debian 196.203.106.113 port 42196 [preauth]
Oct 08 10:29:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:07.263Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:29:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:07.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:07 compute-0 sshd-session[299575]: Invalid user debian from 196.203.106.113 port 49146
Oct 08 10:29:07 compute-0 podman[299577]: 2025-10-08 10:29:07.814923209 +0000 UTC m=+0.064176784 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Oct 08 10:29:07 compute-0 podman[299578]: 2025-10-08 10:29:07.815028692 +0000 UTC m=+0.063803861 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 08 10:29:07 compute-0 sshd-session[299575]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:07 compute-0 sshd-session[299575]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:07 compute-0 ceph-mon[73572]: pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:08.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:08 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:08 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:08 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:08.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:08 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:09 compute-0 nova_compute[262220]: 2025-10-08 10:29:09.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:09.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:09 compute-0 sshd-session[299575]: Failed password for invalid user debian from 196.203.106.113 port 49146 ssh2
Oct 08 10:29:10 compute-0 sshd-session[299575]: Connection closed by invalid user debian 196.203.106.113 port 49146 [preauth]
Oct 08 10:29:10 compute-0 ceph-mon[73572]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:10 compute-0 sshd-session[299619]: Invalid user debian from 196.203.106.113 port 49148
Oct 08 10:29:10 compute-0 sshd-session[299619]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:10 compute-0 sshd-session[299619]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:10 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:10 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:10 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:10.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:10 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:11 compute-0 nova_compute[262220]: 2025-10-08 10:29:11.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:11 compute-0 sudo[299622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:29:11 compute-0 sudo[299622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:29:11 compute-0 sudo[299622]: pam_unix(sudo:session): session closed for user root
Oct 08 10:29:12 compute-0 ceph-mon[73572]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:12 compute-0 sshd-session[299619]: Failed password for invalid user debian from 196.203.106.113 port 49148 ssh2
Oct 08 10:29:12 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:12 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:12 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:12.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:12 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:13 compute-0 sshd-session[299619]: Connection closed by invalid user debian 196.203.106.113 port 49148 [preauth]
Oct 08 10:29:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:13.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:13 compute-0 sshd-session[299648]: Invalid user debian from 196.203.106.113 port 49152
Oct 08 10:29:13 compute-0 sshd-session[299648]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:13 compute-0 sshd-session[299648]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:14 compute-0 nova_compute[262220]: 2025-10-08 10:29:14.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:14 compute-0 ceph-mon[73572]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:14 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:14 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:14 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:14.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:14 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:15] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:29:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:15] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:29:16 compute-0 sshd-session[299648]: Failed password for invalid user debian from 196.203.106.113 port 49152 ssh2
Oct 08 10:29:16 compute-0 nova_compute[262220]: 2025-10-08 10:29:16.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:16 compute-0 ceph-mon[73572]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:16 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:16 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:16 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:16.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:16 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:17.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:17.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:29:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:29:17 compute-0 ceph-mon[73572]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:29:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:29:18 compute-0 sshd-session[299648]: Connection closed by invalid user debian 196.203.106.113 port 49152 [preauth]
Oct 08 10:29:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:29:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:29:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:29:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:29:18 compute-0 sshd-session[299656]: Invalid user debian from 196.203.106.113 port 38774
Oct 08 10:29:18 compute-0 sshd-session[299656]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:18 compute-0 sshd-session[299656]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:18.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:18 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:18 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:18 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:18.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:18 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:19 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:29:19 compute-0 nova_compute[262220]: 2025-10-08 10:29:19.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:20 compute-0 ceph-mon[73572]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/403308089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:29:20 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/403308089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:29:20 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:20 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:20 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:20.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:20 compute-0 sshd-session[299656]: Failed password for invalid user debian from 196.203.106.113 port 38774 ssh2
Oct 08 10:29:20 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:21 compute-0 nova_compute[262220]: 2025-10-08 10:29:21.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:21.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:22 compute-0 ceph-mon[73572]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:22 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:22 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:22 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:22.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:22 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:22 compute-0 sshd-session[299656]: Connection closed by invalid user debian 196.203.106.113 port 38774 [preauth]
Oct 08 10:29:23 compute-0 sshd-session[299662]: Invalid user debian from 196.203.106.113 port 38788
Oct 08 10:29:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:23.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:23 compute-0 sshd-session[299662]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:23 compute-0 sshd-session[299662]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:24 compute-0 nova_compute[262220]: 2025-10-08 10:29:24.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:24 compute-0 ceph-mon[73572]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:24 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:24 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:24 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:24.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:24 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:25.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:29:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:29:25 compute-0 podman[299667]: 2025-10-08 10:29:25.901195373 +0000 UTC m=+0.055674507 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3)
Oct 08 10:29:26 compute-0 sshd-session[299662]: Failed password for invalid user debian from 196.203.106.113 port 38788 ssh2
Oct 08 10:29:26 compute-0 nova_compute[262220]: 2025-10-08 10:29:26.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:26 compute-0 ceph-mon[73572]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:26 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:26 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:26 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:26.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:26 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:27.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:27.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:28 compute-0 sshd-session[299662]: Connection closed by invalid user debian 196.203.106.113 port 38788 [preauth]
Oct 08 10:29:28 compute-0 sshd-session[299692]: Invalid user admin from 196.203.106.113 port 53836
Oct 08 10:29:28 compute-0 ceph-mon[73572]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:28 compute-0 sshd-session[299692]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:28 compute-0 sshd-session[299692]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:28.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:29:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:28.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:29:28 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:28 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:28 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:28.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:29 compute-0 nova_compute[262220]: 2025-10-08 10:29:29.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:29 compute-0 ceph-mon[73572]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:30 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:30 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:30 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:30.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:31 compute-0 sshd-session[299692]: Failed password for invalid user admin from 196.203.106.113 port 53836 ssh2
Oct 08 10:29:31 compute-0 nova_compute[262220]: 2025-10-08 10:29:31.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:31 compute-0 sudo[299697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:29:31 compute-0 sudo[299697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:29:31 compute-0 sudo[299697]: pam_unix(sudo:session): session closed for user root
Oct 08 10:29:32 compute-0 ceph-mon[73572]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:29:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:29:32 compute-0 sshd-session[299692]: Connection closed by invalid user admin 196.203.106.113 port 53836 [preauth]
Oct 08 10:29:32 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:32 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:32 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:32.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:29:33 compute-0 sshd-session[299723]: Invalid user admin from 196.203.106.113 port 53852
Oct 08 10:29:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:33 compute-0 sshd-session[299723]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:33 compute-0 sshd-session[299723]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:33 compute-0 podman[299726]: 2025-10-08 10:29:33.728139052 +0000 UTC m=+0.113828994 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:29:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:34 compute-0 nova_compute[262220]: 2025-10-08 10:29:34.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:34 compute-0 ceph-mon[73572]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:34 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:34 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:34 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:34.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:35.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:35] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:29:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:35] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:29:36 compute-0 sshd-session[299723]: Failed password for invalid user admin from 196.203.106.113 port 53852 ssh2
Oct 08 10:29:36 compute-0 ceph-mon[73572]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:36 compute-0 nova_compute[262220]: 2025-10-08 10:29:36.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:37.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:37.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:37 compute-0 sshd-session[299723]: Connection closed by invalid user admin 196.203.106.113 port 53852 [preauth]
Oct 08 10:29:38 compute-0 ceph-mon[73572]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:38 compute-0 sshd-session[299756]: Invalid user admin from 196.203.106.113 port 41946
Oct 08 10:29:38 compute-0 podman[299759]: 2025-10-08 10:29:38.576801189 +0000 UTC m=+0.059369078 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:29:38 compute-0 podman[299760]: 2025-10-08 10:29:38.593882713 +0000 UTC m=+0.071300445 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 08 10:29:38 compute-0 sshd-session[299756]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:38 compute-0 sshd-session[299756]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:38.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:39.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:39 compute-0 nova_compute[262220]: 2025-10-08 10:29:39.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:40 compute-0 sshd-session[299756]: Failed password for invalid user admin from 196.203.106.113 port 41946 ssh2
Oct 08 10:29:40 compute-0 ceph-mon[73572]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:40 compute-0 sshd-session[299756]: Connection closed by invalid user admin 196.203.106.113 port 41946 [preauth]
Oct 08 10:29:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:41.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:41 compute-0 sshd-session[299799]: Invalid user admin from 196.203.106.113 port 41962
Oct 08 10:29:41 compute-0 sshd-session[299799]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:41 compute-0 sshd-session[299799]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:41 compute-0 nova_compute[262220]: 2025-10-08 10:29:41.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:41.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:42 compute-0 ceph-mon[73572]: pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:43.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:43 compute-0 sshd-session[299799]: Failed password for invalid user admin from 196.203.106.113 port 41962 ssh2
Oct 08 10:29:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:43.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:44 compute-0 nova_compute[262220]: 2025-10-08 10:29:44.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:44 compute-0 ceph-mon[73572]: pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:45.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:45 compute-0 sshd-session[299799]: Connection closed by invalid user admin 196.203.106.113 port 41962 [preauth]
Oct 08 10:29:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:45.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:45] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:29:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:45] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:29:46 compute-0 sshd-session[299806]: Invalid user admin from 196.203.106.113 port 41338
Oct 08 10:29:46 compute-0 sshd-session[299806]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:46 compute-0 sshd-session[299806]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:46 compute-0 nova_compute[262220]: 2025-10-08 10:29:46.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:46 compute-0 ceph-mon[73572]: pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:47.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:47.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:29:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:29:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:47.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:29:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:29:47
Oct 08 10:29:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:29:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:29:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.data', 'backups', 'images', '.nfs', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.meta']
Oct 08 10:29:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:29:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:29:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:29:47 compute-0 ceph-mon[73572]: pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:29:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:29:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:29:48 compute-0 sshd-session[299806]: Failed password for invalid user admin from 196.203.106.113 port 41338 ssh2
Oct 08 10:29:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:48.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:29:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:49.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:49 compute-0 nova_compute[262220]: 2025-10-08 10:29:49.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:49.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:49 compute-0 ceph-mon[73572]: pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:50 compute-0 sshd-session[299806]: Connection closed by invalid user admin 196.203.106.113 port 41338 [preauth]
Oct 08 10:29:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:51.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:51 compute-0 sshd-session[299813]: Invalid user admin from 196.203.106.113 port 41350
Oct 08 10:29:51 compute-0 sshd-session[299813]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:51 compute-0 sshd-session[299813]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:51 compute-0 nova_compute[262220]: 2025-10-08 10:29:51.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:29:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:51.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:29:51 compute-0 sudo[299816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:29:51 compute-0 sudo[299816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:29:51 compute-0 sudo[299816]: pam_unix(sudo:session): session closed for user root
Oct 08 10:29:52 compute-0 ceph-mon[73572]: pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:53.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:54 compute-0 sshd-session[299813]: Failed password for invalid user admin from 196.203.106.113 port 41350 ssh2
Oct 08 10:29:54 compute-0 ceph-mon[73572]: pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:54 compute-0 nova_compute[262220]: 2025-10-08 10:29:54.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:54 compute-0 nova_compute[262220]: 2025-10-08 10:29:54.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:29:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:55.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:55 compute-0 sshd-session[299813]: Connection closed by invalid user admin 196.203.106.113 port 41350 [preauth]
Oct 08 10:29:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:55.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:55] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:29:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:55] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:29:55 compute-0 nova_compute[262220]: 2025-10-08 10:29:55.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:29:56 compute-0 sshd-session[299845]: Invalid user admin from 196.203.106.113 port 39258
Oct 08 10:29:56 compute-0 podman[299847]: 2025-10-08 10:29:56.098890551 +0000 UTC m=+0.083235572 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 08 10:29:56 compute-0 sshd-session[299845]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:29:56 compute-0 sshd-session[299845]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:29:56 compute-0 ceph-mon[73572]: pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:56 compute-0 nova_compute[262220]: 2025-10-08 10:29:56.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:56 compute-0 nova_compute[262220]: 2025-10-08 10:29:56.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:29:56 compute-0 nova_compute[262220]: 2025-10-08 10:29:56.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 08 10:29:56 compute-0 nova_compute[262220]: 2025-10-08 10:29:56.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 08 10:29:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:57.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:57.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:29:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:57.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:57 compute-0 nova_compute[262220]: 2025-10-08 10:29:57.374 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 08 10:29:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:29:57.429 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:29:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:29:57.429 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:29:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:29:57.429 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:29:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:57.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:57 compute-0 nova_compute[262220]: 2025-10-08 10:29:57.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:29:57 compute-0 nova_compute[262220]: 2025-10-08 10:29:57.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 08 10:29:58 compute-0 ceph-mon[73572]: pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:29:58 compute-0 sshd-session[299845]: Failed password for invalid user admin from 196.203.106.113 port 39258 ssh2
Oct 08 10:29:58 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:58.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:29:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:29:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:29:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:29:59 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:29:59 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:29:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:29:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:59.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:29:59 compute-0 nova_compute[262220]: 2025-10-08 10:29:59.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:29:59 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:29:59 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:29:59 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:29:59 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:59.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:30:00 compute-0 ceph-mon[73572]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct 08 10:30:00 compute-0 sshd-session[299845]: Connection closed by invalid user admin 196.203.106.113 port 39258 [preauth]
Oct 08 10:30:00 compute-0 ceph-mon[73572]: pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:00 compute-0 ceph-mon[73572]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct 08 10:30:00 compute-0 sshd-session[299873]: Invalid user admin from 196.203.106.113 port 39268
Oct 08 10:30:01 compute-0 sshd-session[299873]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:30:01 compute-0 sshd-session[299873]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:30:01 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.020 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.021 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.021 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.021 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.021 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:01.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.259 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.259 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.259 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.260 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.260 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:01 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:30:01 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3116006053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:30:01 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:01 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:01 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.719 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.890 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.892 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4524MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.892 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:30:01 compute-0 nova_compute[262220]: 2025-10-08 10:30:01.892 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:30:02 compute-0 nova_compute[262220]: 2025-10-08 10:30:02.239 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 08 10:30:02 compute-0 nova_compute[262220]: 2025-10-08 10:30:02.240 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 08 10:30:02 compute-0 nova_compute[262220]: 2025-10-08 10:30:02.399 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 08 10:30:02 compute-0 sshd-session[299873]: Failed password for invalid user admin from 196.203.106.113 port 39268 ssh2
Oct 08 10:30:02 compute-0 ceph-mon[73572]: pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3116006053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:30:02 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3090993139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:30:02 compute-0 nova_compute[262220]: 2025-10-08 10:30:02.582 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 08 10:30:02 compute-0 nova_compute[262220]: 2025-10-08 10:30:02.582 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 08 10:30:02 compute-0 nova_compute[262220]: 2025-10-08 10:30:02.629 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 08 10:30:02 compute-0 nova_compute[262220]: 2025-10-08 10:30:02.649 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 08 10:30:02 compute-0 nova_compute[262220]: 2025-10-08 10:30:02.667 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 08 10:30:02 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:30:02 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:30:03 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:03.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:03 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 08 10:30:03 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598333599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:30:03 compute-0 nova_compute[262220]: 2025-10-08 10:30:03.137 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
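[editor's note] The two DEBUG lines above show nova_compute shelling out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size its RBD-backed storage. A minimal sketch of reproducing that call and reading the cluster totals follows; the JSON keys under "stats" (total_bytes, total_avail_bytes) are assumed from the usual ceph df JSON layout and are not taken from this log.

    # Sketch: run the same "ceph df" query nova_compute logs above and
    # print cluster capacity. The "stats" keys are assumed from the
    # standard `ceph df --format=json` output; adjust for your release.
    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client="openstack"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
            capture_output=True, check=True, text=True,
        )
        return json.loads(out.stdout)

    if __name__ == "__main__":
        stats = ceph_df()["stats"]
        gib = 1024 ** 3
        print("total %.1f GiB, avail %.1f GiB" % (
            stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib))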
Oct 08 10:30:03 compute-0 sshd-session[299873]: Connection closed by invalid user admin 196.203.106.113 port 39268 [preauth]
Oct 08 10:30:03 compute-0 nova_compute[262220]: 2025-10-08 10:30:03.144 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 08 10:30:03 compute-0 nova_compute[262220]: 2025-10-08 10:30:03.169 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 08 10:30:03 compute-0 nova_compute[262220]: 2025-10-08 10:30:03.171 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 08 10:30:03 compute-0 nova_compute[262220]: 2025-10-08 10:30:03.171 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
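[editor's note] The inventory payload logged for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 is what Placement uses to cap scheduling: roughly (total - reserved) * allocation_ratio per resource class. A small sketch with the values from this log; the exact rounding Placement applies is an assumption, shown for illustration only.

    # Effective capacity implied by the DEBUG inventory above:
    # approximately (total - reserved) * allocation_ratio per resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(f"{rc}: {capacity} schedulable")
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52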
Oct 08 10:30:03 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:39270 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:03 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:39278 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:03 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:30:03 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1790876073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:30:03 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2598333599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:30:03 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:03 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:03 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:03.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:03 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:39280 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:03 compute-0 podman[299922]: 2025-10-08 10:30:03.931871627 +0000 UTC m=+0.090382104 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 08 10:30:03 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:39286 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:04 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:04 compute-0 nova_compute[262220]: 2025-10-08 10:30:04.037 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:04 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:39294 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:04 compute-0 nova_compute[262220]: 2025-10-08 10:30:04.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:04 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:39306 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:04 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:04 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53268 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:04 compute-0 ceph-mon[73572]: pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:04 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53278 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:05 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:05.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:05 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53282 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:05 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53286 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:05 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53290 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:05 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:05 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:05 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:05.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:05 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:05] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:30:05 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:05] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:30:05 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3377334363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:30:05 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1415216909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 08 10:30:05 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53304 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:06 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53308 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:06 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53320 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:06 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53334 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:06 compute-0 nova_compute[262220]: 2025-10-08 10:30:06.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:06 compute-0 sudo[299952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:30:06 compute-0 sudo[299952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:06 compute-0 sudo[299952]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:06 compute-0 sudo[299977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 08 10:30:06 compute-0 sudo[299977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:06 compute-0 ceph-mon[73572]: pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:06 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53350 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:07 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53356 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:07.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:07 compute-0 sudo[299977]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:07 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:07.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
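[editor's note] The Alertmanager dispatcher error above recurs throughout this window: both dashboard webhook targets time out. A quick reachability probe of the two endpoints (URLs copied verbatim from the error message; a plain GET is only an illustrative connectivity check, not the POST Alertmanager actually sends) could look like:

    # Probe the two webhook endpoints Alertmanager reports as timing out.
    import urllib.request

    targets = [
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    ]

    for url in targets:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                print(url, "->", resp.status)
        except OSError as exc:  # covers URLError and socket timeouts
            print(url, "->", exc)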
Oct 08 10:30:07 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53358 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:30:07 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:30:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 08 10:30:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:30:07 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 08 10:30:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:30:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 08 10:30:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:30:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 08 10:30:07 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:30:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 08 10:30:07 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:30:07 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:30:07 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:30:07 compute-0 sudo[300036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:30:07 compute-0 sudo[300036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:07 compute-0 sudo[300036]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:07 compute-0 sudo[300061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 08 10:30:07 compute-0 sudo[300061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:07 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53370 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:07 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:07 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:07 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:07.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:07 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53386 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:30:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 08 10:30:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:30:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:30:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 08 10:30:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 08 10:30:07 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:30:07 compute-0 podman[300128]: 2025-10-08 10:30:07.917055693 +0000 UTC m=+0.046527481 container create d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 08 10:30:07 compute-0 systemd[1]: Started libpod-conmon-d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f.scope.
Oct 08 10:30:07 compute-0 podman[300128]: 2025-10-08 10:30:07.89724855 +0000 UTC m=+0.026720348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:30:07 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53396 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:08 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:30:08 compute-0 podman[300128]: 2025-10-08 10:30:08.024248831 +0000 UTC m=+0.153720629 container init d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:30:08 compute-0 podman[300128]: 2025-10-08 10:30:08.034998551 +0000 UTC m=+0.164470369 container start d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 08 10:30:08 compute-0 podman[300128]: 2025-10-08 10:30:08.03901522 +0000 UTC m=+0.168487028 container attach d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 08 10:30:08 compute-0 inspiring_nobel[300144]: 167 167
Oct 08 10:30:08 compute-0 systemd[1]: libpod-d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f.scope: Deactivated successfully.
Oct 08 10:30:08 compute-0 podman[300128]: 2025-10-08 10:30:08.043799755 +0000 UTC m=+0.173271543 container died d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-3295e923f97dc2889ae0f1b34cbccba0f83c4cf7b8573df253f1bd3e4a3c113d-merged.mount: Deactivated successfully.
Oct 08 10:30:08 compute-0 podman[300128]: 2025-10-08 10:30:08.087954689 +0000 UTC m=+0.217426477 container remove d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:30:08 compute-0 systemd[1]: libpod-conmon-d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f.scope: Deactivated successfully.
Oct 08 10:30:08 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53412 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:08 compute-0 podman[300170]: 2025-10-08 10:30:08.292684312 +0000 UTC m=+0.046180840 container create a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:30:08 compute-0 systemd[1]: Started libpod-conmon-a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524.scope.
Oct 08 10:30:08 compute-0 podman[300170]: 2025-10-08 10:30:08.274068458 +0000 UTC m=+0.027565036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:30:08 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:08 compute-0 podman[300170]: 2025-10-08 10:30:08.384646016 +0000 UTC m=+0.138142544 container init a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:30:08 compute-0 podman[300170]: 2025-10-08 10:30:08.394977251 +0000 UTC m=+0.148473779 container start a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:30:08 compute-0 podman[300170]: 2025-10-08 10:30:08.398252648 +0000 UTC m=+0.151749206 container attach a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 08 10:30:08 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53424 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:08 compute-0 zen_ptolemy[300186]: --> passed data devices: 0 physical, 1 LVM
Oct 08 10:30:08 compute-0 zen_ptolemy[300186]: --> All data devices are unavailable
Oct 08 10:30:08 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53426 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:08 compute-0 systemd[1]: libpod-a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524.scope: Deactivated successfully.
Oct 08 10:30:08 compute-0 podman[300170]: 2025-10-08 10:30:08.730814649 +0000 UTC m=+0.484311217 container died a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 08 10:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c-merged.mount: Deactivated successfully.
Oct 08 10:30:08 compute-0 podman[300170]: 2025-10-08 10:30:08.792211131 +0000 UTC m=+0.545707669 container remove a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:30:08 compute-0 ceph-mon[73572]: pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:08 compute-0 ceph-mon[73572]: pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:08 compute-0 systemd[1]: libpod-conmon-a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524.scope: Deactivated successfully.
Oct 08 10:30:08 compute-0 sudo[300061]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:08 compute-0 podman[300205]: 2025-10-08 10:30:08.837706138 +0000 UTC m=+0.064533005 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 08 10:30:08 compute-0 podman[300202]: 2025-10-08 10:30:08.84363487 +0000 UTC m=+0.083462039 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 10:30:08 compute-0 nova_compute[262220]: 2025-10-08 10:30:08.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:08 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:08.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:30:08 compute-0 sudo[300248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:30:08 compute-0 sudo[300248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:08 compute-0 sudo[300248]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:08 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53442 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:08 compute-0 sudo[300273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- lvm list --format json
Oct 08 10:30:08 compute-0 sudo[300273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:09 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:09.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:09 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53446 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:09 compute-0 nova_compute[262220]: 2025-10-08 10:30:09.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:09 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:09 compute-0 podman[300339]: 2025-10-08 10:30:09.391349583 +0000 UTC m=+0.041698154 container create e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:30:09 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53456 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:09 compute-0 systemd[1]: Started libpod-conmon-e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b.scope.
Oct 08 10:30:09 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:30:09 compute-0 podman[300339]: 2025-10-08 10:30:09.45411378 +0000 UTC m=+0.104462351 container init e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:30:09 compute-0 podman[300339]: 2025-10-08 10:30:09.459870097 +0000 UTC m=+0.110218668 container start e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:30:09 compute-0 podman[300339]: 2025-10-08 10:30:09.464625851 +0000 UTC m=+0.114974452 container attach e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Oct 08 10:30:09 compute-0 naughty_lamarr[300356]: 167 167
Oct 08 10:30:09 compute-0 systemd[1]: libpod-e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b.scope: Deactivated successfully.
Oct 08 10:30:09 compute-0 podman[300339]: 2025-10-08 10:30:09.466627435 +0000 UTC m=+0.116976006 container died e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 08 10:30:09 compute-0 podman[300339]: 2025-10-08 10:30:09.374719863 +0000 UTC m=+0.025068454 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-685c1d224828c200b2ec478fed76961c9fd557d5bf14932b09fb554f15f25a03-merged.mount: Deactivated successfully.
Oct 08 10:30:09 compute-0 podman[300339]: 2025-10-08 10:30:09.504123173 +0000 UTC m=+0.154471744 container remove e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 08 10:30:09 compute-0 systemd[1]: libpod-conmon-e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b.scope: Deactivated successfully.
Oct 08 10:30:09 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:09 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53472 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:09 compute-0 podman[300382]: 2025-10-08 10:30:09.689797637 +0000 UTC m=+0.048538605 container create 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 08 10:30:09 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:09 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:09 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:09.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:09 compute-0 systemd[1]: Started libpod-conmon-0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b.scope.
Oct 08 10:30:09 compute-0 podman[300382]: 2025-10-08 10:30:09.667294067 +0000 UTC m=+0.026035095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:30:09 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:09 compute-0 podman[300382]: 2025-10-08 10:30:09.782181485 +0000 UTC m=+0.140922483 container init 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 08 10:30:09 compute-0 podman[300382]: 2025-10-08 10:30:09.788617584 +0000 UTC m=+0.147358562 container start 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:30:09 compute-0 podman[300382]: 2025-10-08 10:30:09.793481242 +0000 UTC m=+0.152222230 container attach 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:30:09 compute-0 ceph-mon[73572]: pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:09 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53474 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:09 compute-0 nova_compute[262220]: 2025-10-08 10:30:09.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:10 compute-0 silly_pare[300398]: {
Oct 08 10:30:10 compute-0 silly_pare[300398]:     "1": [
Oct 08 10:30:10 compute-0 silly_pare[300398]:         {
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "devices": [
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "/dev/loop3"
Oct 08 10:30:10 compute-0 silly_pare[300398]:             ],
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "lv_name": "ceph_lv0",
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "lv_size": "21470642176",
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "name": "ceph_lv0",
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "tags": {
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.cephx_lockbox_secret": "",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.cluster_name": "ceph",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.crush_device_class": "",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.encrypted": "0",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.osd_id": "1",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.type": "block",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.vdo": "0",
Oct 08 10:30:10 compute-0 silly_pare[300398]:                 "ceph.with_tpm": "0"
Oct 08 10:30:10 compute-0 silly_pare[300398]:             },
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "type": "block",
Oct 08 10:30:10 compute-0 silly_pare[300398]:             "vg_name": "ceph_vg0"
Oct 08 10:30:10 compute-0 silly_pare[300398]:         }
Oct 08 10:30:10 compute-0 silly_pare[300398]:     ]
Oct 08 10:30:10 compute-0 silly_pare[300398]: }
Oct 08 10:30:10 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53480 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:10 compute-0 systemd[1]: libpod-0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b.scope: Deactivated successfully.
Oct 08 10:30:10 compute-0 podman[300382]: 2025-10-08 10:30:10.115243603 +0000 UTC m=+0.473984641 container died 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Oct 08 10:30:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965-merged.mount: Deactivated successfully.
Oct 08 10:30:10 compute-0 podman[300382]: 2025-10-08 10:30:10.16814241 +0000 UTC m=+0.526883388 container remove 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:30:10 compute-0 systemd[1]: libpod-conmon-0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b.scope: Deactivated successfully.
Oct 08 10:30:10 compute-0 sudo[300273]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:10 compute-0 sudo[300421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 08 10:30:10 compute-0 sudo[300421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:10 compute-0 sudo[300421]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:10 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53488 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:10 compute-0 sudo[300446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -- raw list --format json
Oct 08 10:30:10 compute-0 sudo[300446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:10 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53500 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:10 compute-0 podman[300511]: 2025-10-08 10:30:10.73242903 +0000 UTC m=+0.037121405 container create 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Oct 08 10:30:10 compute-0 systemd[1]: Started libpod-conmon-738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f.scope.
Oct 08 10:30:10 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53508 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:10 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:30:10 compute-0 podman[300511]: 2025-10-08 10:30:10.716845355 +0000 UTC m=+0.021537750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:30:10 compute-0 podman[300511]: 2025-10-08 10:30:10.824373964 +0000 UTC m=+0.129066529 container init 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 08 10:30:10 compute-0 podman[300511]: 2025-10-08 10:30:10.833642965 +0000 UTC m=+0.138335330 container start 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 08 10:30:10 compute-0 podman[300511]: 2025-10-08 10:30:10.837104608 +0000 UTC m=+0.141796983 container attach 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 08 10:30:10 compute-0 zealous_lumiere[300525]: 167 167
Oct 08 10:30:10 compute-0 systemd[1]: libpod-738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f.scope: Deactivated successfully.
Oct 08 10:30:10 compute-0 podman[300511]: 2025-10-08 10:30:10.841176129 +0000 UTC m=+0.145868504 container died 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 08 10:30:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b65c6f77ac492eaf6911505614378ec441239cb6ccc15719a57b43fb190b7088-merged.mount: Deactivated successfully.
Oct 08 10:30:10 compute-0 podman[300511]: 2025-10-08 10:30:10.889081424 +0000 UTC m=+0.193773799 container remove 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 08 10:30:10 compute-0 systemd[1]: libpod-conmon-738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f.scope: Deactivated successfully.
Oct 08 10:30:10 compute-0 nova_compute[262220]: 2025-10-08 10:30:10.902 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:10 compute-0 nova_compute[262220]: 2025-10-08 10:30:10.904 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 08 10:30:11 compute-0 nova_compute[262220]: 2025-10-08 10:30:11.015 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 08 10:30:11 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53518 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:11.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:11 compute-0 podman[300553]: 2025-10-08 10:30:11.099715369 +0000 UTC m=+0.050970795 container create 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 10:30:11 compute-0 systemd[1]: Started libpod-conmon-5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113.scope.
Oct 08 10:30:11 compute-0 systemd[1]: Started libcrun container.
Oct 08 10:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 08 10:30:11 compute-0 podman[300553]: 2025-10-08 10:30:11.08464681 +0000 UTC m=+0.035902276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 08 10:30:11 compute-0 podman[300553]: 2025-10-08 10:30:11.17928588 +0000 UTC m=+0.130541386 container init 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 08 10:30:11 compute-0 podman[300553]: 2025-10-08 10:30:11.190520585 +0000 UTC m=+0.141776011 container start 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 08 10:30:11 compute-0 podman[300553]: 2025-10-08 10:30:11.195954741 +0000 UTC m=+0.147210247 container attach 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 08 10:30:11 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53522 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:11 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:11 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53526 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:11 compute-0 nova_compute[262220]: 2025-10-08 10:30:11.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:11 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:11 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:11 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:11 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53542 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:11 compute-0 lvm[300646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:30:11 compute-0 lvm[300646]: VG ceph_vg0 finished
Oct 08 10:30:11 compute-0 gracious_bohr[300571]: {}
Oct 08 10:30:11 compute-0 systemd[1]: libpod-5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113.scope: Deactivated successfully.
Oct 08 10:30:11 compute-0 systemd[1]: libpod-5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113.scope: Consumed 1.213s CPU time.
Oct 08 10:30:11 compute-0 podman[300553]: 2025-10-08 10:30:11.931347175 +0000 UTC m=+0.882602681 container died 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 08 10:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64-merged.mount: Deactivated successfully.
Oct 08 10:30:11 compute-0 podman[300553]: 2025-10-08 10:30:11.990153663 +0000 UTC m=+0.941409079 container remove 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 08 10:30:12 compute-0 systemd[1]: libpod-conmon-5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113.scope: Deactivated successfully.
Oct 08 10:30:12 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53558 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:12 compute-0 sudo[300446]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 08 10:30:12 compute-0 sudo[300662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:30:12 compute-0 sudo[300662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:12 compute-0 sudo[300662]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:30:12 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 08 10:30:12 compute-0 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:30:12 compute-0 sudo[300687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 08 10:30:12 compute-0 sudo[300687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:12 compute-0 sudo[300687]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:12 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53566 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:12 compute-0 ceph-mon[73572]: pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:30:12 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct 08 10:30:12 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53572 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:12 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53578 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:12 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53586 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:13.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:13 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53588 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:13 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:13 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53592 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:13 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53600 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:13 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:13 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:13 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:13.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:13 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53614 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:14 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:14 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53628 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:14 compute-0 nova_compute[262220]: 2025-10-08 10:30:14.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:14 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53642 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:14 compute-0 ceph-mon[73572]: pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:14 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:53650 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:14 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:14 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36874 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36890 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:15.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36898 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:15 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36914 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36922 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:15 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:15 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:15 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:15 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:15] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:30:15 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:15] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 08 10:30:15 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36938 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:16 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36948 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:16 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36950 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:16 compute-0 ceph-mon[73572]: pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:16 compute-0 nova_compute[262220]: 2025-10-08 10:30:16.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:16 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36952 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:16 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36954 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:17.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:17 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36964 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:17 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:17.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:30:17 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36976 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:17 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:17 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36986 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:17 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:17 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:17 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:17.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:17 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36994 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:17 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:30:17 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:30:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:30:17 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:30:18 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:36998 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:30:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:30:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:30:18 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:30:18 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:37000 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:18 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:37004 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:18 compute-0 ceph-mon[73572]: pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 08 10:30:18 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:30:18 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:37008 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:18 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:18.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:30:18 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:37014 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:19 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:19.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:19 compute-0 sshd[189680]: drop connection #0 from [196.203.106.113]:37028 on [38.102.83.224]:22 penalty: failed authentication
Oct 08 10:30:19 compute-0 nova_compute[262220]: 2025-10-08 10:30:19.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:19 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:19 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.577369) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419577436, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1314, "num_deletes": 255, "total_data_size": 2407130, "memory_usage": 2446752, "flush_reason": "Manual Compaction"}
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419590807, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2354481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36992, "largest_seqno": 38305, "table_properties": {"data_size": 2348246, "index_size": 3434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13278, "raw_average_key_size": 19, "raw_value_size": 2335655, "raw_average_value_size": 3496, "num_data_blocks": 149, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759919300, "oldest_key_time": 1759919300, "file_creation_time": 1759919419, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 13451 microseconds, and 5398 cpu microseconds.
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.590847) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2354481 bytes OK
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.590870) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.592611) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.592624) EVENT_LOG_v1 {"time_micros": 1759919419592620, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.592646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2401349, prev total WAL file size 2401349, number of live WAL files 2.
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.593999) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303037' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2299KB)], [80(11MB)]
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419594190, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 14882499, "oldest_snapshot_seqno": -1}
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6824 keys, 14719138 bytes, temperature: kUnknown
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419710154, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 14719138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14674184, "index_size": 26794, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 179545, "raw_average_key_size": 26, "raw_value_size": 14551831, "raw_average_value_size": 2132, "num_data_blocks": 1056, "num_entries": 6824, "num_filter_entries": 6824, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919419, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.710484) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14719138 bytes
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.711977) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.2 rd, 126.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.9 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(12.6) write-amplify(6.3) OK, records in: 7352, records dropped: 528 output_compression: NoCompression
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.712010) EVENT_LOG_v1 {"time_micros": 1759919419711995, "job": 46, "event": "compaction_finished", "compaction_time_micros": 116045, "compaction_time_cpu_micros": 58132, "output_level": 6, "num_output_files": 1, "total_output_size": 14719138, "num_input_records": 7352, "num_output_records": 6824, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419712925, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419716536, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.593196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:30:19 compute-0 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 08 10:30:19 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:19 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:19 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:19.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:19 compute-0 sshd-session[300719]: Invalid user admin from 196.203.106.113 port 37034
Oct 08 10:30:20 compute-0 sshd-session[300719]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:30:20 compute-0 sshd-session[300719]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:30:20 compute-0 ceph-mon[73572]: pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:21 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:21 compute-0 nova_compute[262220]: 2025-10-08 10:30:21.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1420832955' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 08 10:30:21 compute-0 ceph-mon[73572]: from='client.? 192.168.122.10:0/1420832955' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 08 10:30:21 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:21 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:21 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:21.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:21 compute-0 sshd-session[300719]: Failed password for invalid user admin from 196.203.106.113 port 37034 ssh2
Oct 08 10:30:22 compute-0 sshd-session[300719]: Connection closed by invalid user admin 196.203.106.113 port 37034 [preauth]
Oct 08 10:30:22 compute-0 ceph-mon[73572]: pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:22 compute-0 sshd-session[300724]: Invalid user admin from 196.203.106.113 port 37044
Oct 08 10:30:22 compute-0 sshd-session[300724]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:30:22 compute-0 sshd-session[300724]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:30:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:23.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:23 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:23 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:23 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:23 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:23.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:24 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:24 compute-0 nova_compute[262220]: 2025-10-08 10:30:24.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:24 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:24 compute-0 sshd-session[300724]: Failed password for invalid user admin from 196.203.106.113 port 37044 ssh2
Oct 08 10:30:24 compute-0 ceph-mon[73572]: pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:25 compute-0 sshd-session[300724]: Connection closed by invalid user admin 196.203.106.113 port 37044 [preauth]
Oct 08 10:30:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:25 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:25 compute-0 sshd-session[300728]: Invalid user admin from 196.203.106.113 port 51022
Oct 08 10:30:25 compute-0 sshd-session[300728]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:30:25 compute-0 sshd-session[300728]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:30:25 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:25 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:25 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:25.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:25 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:30:25 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:30:26 compute-0 nova_compute[262220]: 2025-10-08 10:30:26.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:26 compute-0 ceph-mon[73572]: pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:26 compute-0 podman[300732]: 2025-10-08 10:30:26.91733188 +0000 UTC m=+0.075189810 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 08 10:30:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:27.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:27 compute-0 sshd-session[300728]: Failed password for invalid user admin from 196.203.106.113 port 51022 ssh2
Oct 08 10:30:27 compute-0 sshd-session[300754]: Accepted publickey for zuul from 192.168.122.10 port 44882 ssh2: ECDSA SHA256:7LvTHAj52RCSkKXOsIbzSDrEw7lwj23D0dAJ0Qgx0Rg
Oct 08 10:30:27 compute-0 systemd-logind[798]: New session 61 of user zuul.
Oct 08 10:30:27 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:27.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:30:27 compute-0 systemd[1]: Started Session 61 of User zuul.
Oct 08 10:30:27 compute-0 sshd-session[300754]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 10:30:27 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:27 compute-0 sudo[300759]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 08 10:30:27 compute-0 sudo[300759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 10:30:27 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:27 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:27 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:27.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:27 compute-0 sshd-session[300728]: Connection closed by invalid user admin 196.203.106.113 port 51022 [preauth]
Oct 08 10:30:28 compute-0 sshd-session[300793]: Invalid user admin from 196.203.106.113 port 51028
Oct 08 10:30:28 compute-0 sshd-session[300793]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:30:28 compute-0 sshd-session[300793]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:30:28 compute-0 ceph-mon[73572]: pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:28.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:30:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:28.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:30:28 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:28.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:30:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:29 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:29.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:29 compute-0 nova_compute[262220]: 2025-10-08 10:30:29.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:29 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:29 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:29 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:29 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:29 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:29.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:29 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27563 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:29 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27268 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17466 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27575 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:30 compute-0 sshd-session[300793]: Failed password for invalid user admin from 196.203.106.113 port 51028 ssh2
Oct 08 10:30:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27274 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:30 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17475 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:30 compute-0 ceph-mon[73572]: pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:30 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/153556304' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:30:30 compute-0 sshd-session[300793]: Connection closed by invalid user admin 196.203.106.113 port 51028 [preauth]
Oct 08 10:30:31 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 08 10:30:31 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971704690' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:30:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:31.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:31 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:31 compute-0 sshd-session[300989]: Invalid user admin from 196.203.106.113 port 51030
Oct 08 10:30:31 compute-0 sshd-session[300989]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:30:31 compute-0 sshd-session[300989]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:30:31 compute-0 nova_compute[262220]: 2025-10-08 10:30:31.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:31 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:31 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:31 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:31.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:31 compute-0 ceph-mon[73572]: from='client.27563 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:31 compute-0 ceph-mon[73572]: from='client.27268 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:31 compute-0 ceph-mon[73572]: from='client.17466 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:31 compute-0 ceph-mon[73572]: from='client.27575 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:31 compute-0 ceph-mon[73572]: from='client.27274 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:31 compute-0 ceph-mon[73572]: from='client.17475 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4178397777' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:30:31 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1971704690' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 08 10:30:32 compute-0 sudo[301056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:30:32 compute-0 sudo[301056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:32 compute-0 sudo[301056]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:32 compute-0 ceph-mon[73572]: pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:32 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:30:32 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:30:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:33.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:33 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:33 compute-0 sshd-session[300989]: Failed password for invalid user admin from 196.203.106.113 port 51030 ssh2
Oct 08 10:30:33 compute-0 sshd-session[300989]: Connection closed by invalid user admin 196.203.106.113 port 51030 [preauth]
Oct 08 10:30:33 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:33 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:33 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:33.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:33 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:30:33 compute-0 ceph-mon[73572]: pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:34 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:34 compute-0 nova_compute[262220]: 2025-10-08 10:30:34.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:34 compute-0 ovs-vsctl[301114]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 08 10:30:34 compute-0 sshd-session[301085]: Invalid user pi from 196.203.106.113 port 51044
Oct 08 10:30:34 compute-0 podman[301124]: 2025-10-08 10:30:34.463771858 +0000 UTC m=+0.094881200 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 10:30:34 compute-0 sshd-session[301085]: pam_unix(sshd:auth): check pass; user unknown
Oct 08 10:30:34 compute-0 sshd-session[301085]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113
Oct 08 10:30:34 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:35.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:35 compute-0 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 08 10:30:35 compute-0 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 08 10:30:35 compute-0 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 08 10:30:35 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:35 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27596 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:35 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:30:35 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:30:35 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:35 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:35 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:35.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:35 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: cache status {prefix=cache status} (starting...)
Oct 08 10:30:35 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:35 compute-0 lvm[301450]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 08 10:30:35 compute-0 lvm[301450]: VG ceph_vg0 finished
Oct 08 10:30:35 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 08 10:30:35 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:30:36 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: client ls {prefix=client ls} (starting...)
Oct 08 10:30:36 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27608 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:36 compute-0 sshd-session[301085]: Failed password for invalid user pi from 196.203.106.113 port 51044 ssh2
Oct 08 10:30:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27620 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:36 compute-0 ceph-mon[73572]: pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:36 compute-0 ceph-mon[73572]: from='client.27596 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:36 compute-0 ceph-mon[73572]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:30:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/837932214' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:30:36 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/348261753' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:30:36 compute-0 sshd-session[301085]: Connection closed by invalid user pi 196.203.106.113 port 51044 [preauth]
Oct 08 10:30:36 compute-0 nova_compute[262220]: 2025-10-08 10:30:36.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:36 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: damage ls {prefix=damage ls} (starting...)
Oct 08 10:30:36 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27301 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17499 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:36 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump loads {prefix=dump loads} (starting...)
Oct 08 10:30:36 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:36 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 08 10:30:36 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044351178' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:30:36 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27632 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:36 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 08 10:30:36 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 08 10:30:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:30:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:30:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:37.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17517 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27325 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:37 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:37.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:30:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 08 10:30:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366398183' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:37 compute-0 unix_chkpwd[301707]: password check failed for user (ftp)
Oct 08 10:30:37 compute-0 sshd-session[301631]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.203.106.113  user=ftp
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17532 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27343 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27665 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:37 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 08 10:30:37 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000713736' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 08 10:30:37 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:37 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:37 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:37.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.27608 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.27620 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3867699449' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2044351178' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3646477179' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1799264437' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/366398183' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3175527391' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1921124390' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 08 10:30:37 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17544 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: ops {prefix=ops} (starting...)
Oct 08 10:30:38 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27683 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 08 10:30:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405516947' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27361 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 08 10:30:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/803082500' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17565 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 08 10:30:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.27301 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.17499 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.27632 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.17517 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.27325 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.17532 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.27343 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.27665 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4000713736' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2547679674' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/666183731' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/405516947' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2249989124' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1972797703' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/803082500' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3686254037' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3862360611' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27385 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: session ls {prefix=session ls} (starting...)
Oct 08 10:30:38 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct 08 10:30:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:30:38 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:38.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:30:38 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17589 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:38 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 08 10:30:38 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080598880' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:39 compute-0 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: status {prefix=status} (starting...)
Oct 08 10:30:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:39.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:39 compute-0 nova_compute[262220]: 2025-10-08 10:30:39.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:39 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27409 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 08 10:30:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740380572' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:30:39 compute-0 sshd-session[301631]: Failed password for ftp from 196.203.106.113 port 49730 ssh2
Oct 08 10:30:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 08 10:30:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3866329630' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:39 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27737 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:30:39.640+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:30:39 compute-0 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:30:39 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:39 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:39 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:39.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 08 10:30:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.17544 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.27683 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.27361 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.17565 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.27385 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/355068105' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.17589 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2080598880' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1521085738' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/61177752' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.27409 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1079484198' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2740380572' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3866329630' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.27737 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1883190501' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/988734602' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 08 10:30:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 08 10:30:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3456496455' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:30:39 compute-0 podman[302023]: 2025-10-08 10:30:39.932240768 +0000 UTC m=+0.081865478 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 08 10:30:39 compute-0 podman[302024]: 2025-10-08 10:30:39.934291864 +0000 UTC m=+0.083891413 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 08 10:30:39 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 08 10:30:39 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477387204' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17637 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:30:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:30:40.359+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:30:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 08 10:30:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1520230029' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27788 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27466 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:40 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:30:40.754+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:30:40 compute-0 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 08 10:30:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 08 10:30:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118741586' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 08 10:30:40 compute-0 sshd-session[301631]: Connection closed by authenticating user ftp 196.203.106.113 port 49730 [preauth]
Oct 08 10:30:40 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 08 10:30:40 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1905105844' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1403834178' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3456496455' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3477387204' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2018553590' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/455798251' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3074287625' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.17637 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1520230029' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1609468654' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2525358923' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/710023201' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.27788 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.27466 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3118741586' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 08 10:30:40 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1905105844' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 10:30:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:41.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:41 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27803 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 08 10:30:41 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869719441' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 08 10:30:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 08 10:30:41 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055743888' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:30:41 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:41 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17685 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:41 compute-0 nova_compute[262220]: 2025-10-08 10:30:41.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:41 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:41 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:41 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:41 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 08 10:30:41 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778309663' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:30:41 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17697 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:41 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27836 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:41 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27511 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2385787221' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/773626954' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.27803 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2803720045' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2869719441' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4055743888' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.17685 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/609986817' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1888208235' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3778309663' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1032180633' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17706 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 08 10:30:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1313256790' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27535 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27854 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17724 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:03.184166+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:04.184497+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:05.184797+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:06.185156+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:07.185323+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:08.185458+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:09.185703+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:10.185847+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d953680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2dbe94a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:11.186056+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:12.186241+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:13.186445+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:14.186608+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:15.186787+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:16.186976+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:17.187125+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:18.187344+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:19.187594+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:20.187729+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:21.187852+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.191692352s of 38.196037292s, submitted: 1
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:22.188007+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:23.188160+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:24.189487+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:25.189634+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999313 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:26.189795+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:27.190084+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:28.190230+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:29.190414+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:30.190578+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:31.190947+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:32.191127+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:33.191473+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:34.191636+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:35.191820+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:36.192116+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.237012863s of 15.247964859s, submitted: 3
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:37.192246+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:38.192379+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:39.192557+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:40.192724+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:41.192877+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:42.193012+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:43.193397+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:44.193698+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:45.193845+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:46.194070+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:47.194202+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:48.194353+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:49.194525+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:50.194783+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:51.194932+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:52.195106+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:53.195276+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:54.195912+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:55.196089+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:56.196239+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:57.196635+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:58.196787+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:58:59.197009+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:00.197121+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:01.197261+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:02.197405+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:03.197535+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:04.200094+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8c00 session 0x559f2d82f2c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:05.200199+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:06.200480+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:07.200631+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:08.200770+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:09.200882+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:10.201065+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:11.201206+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:12.201362+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:13.201520+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:14.201757+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:15.202014+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.051769257s of 39.055622101s, submitted: 1
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:16.202235+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:17.202385+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d961680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2a95b680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:18.202554+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:19.202720+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:20.202855+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:21.202993+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:22.203121+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:23.203279+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:24.203382+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:25.203524+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:26.203682+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:27.203852+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:28.204120+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.737722397s of 12.740792274s, submitted: 1
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:29.204250+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:30.204632+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000957 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:31.204958+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 1703936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:32.205260+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:33.205661+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:34.205864+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:35.206058+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:36.206329+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:37.206484+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:38.206625+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:39.207122+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:40.207291+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.218849182s of 12.235140800s, submitted: 3
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:41.207407+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:42.207580+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:43.207767+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:44.207892+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:45.208299+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:46.208472+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:47.208635+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:48.208772+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:49.208961+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:50.209160+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:51.209291+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:52.209427+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:53.209565+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:54.209742+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:55.209880+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:56.210077+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:57.210272+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:58.210401+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T09:59:59.210577+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:00.210748+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:01.210905+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:02.211074+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:03.211207+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:04.211329+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c4243c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d82e3c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:05.211524+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:06.211686+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:07.211857+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:08.212002+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:09.212143+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:10.212325+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:11.212451+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000080s
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:12.212573+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:13.212706+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:14.212832+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:15.212960+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.856376648s of 34.864582062s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:16.213106+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e000 session 0x559f2a9a3a40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:17.213268+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:18.213416+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:19.213542+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:20.213698+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:21.213832+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:22.213959+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:23.214120+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:24.214239+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:25.214350+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:26.214491+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d9534a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d9612c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:27.214618+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.321186066s of 12.326921463s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:28.214760+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:29.214875+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:30.215008+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001614 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:31.215111+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:32.215332+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:33.215412+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:34.215538+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:35.215682+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001614 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:36.215874+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:37.216003+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:38.216141+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:39.216332+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:40.216473+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:41.216683+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:42.216825+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:43.216934+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.701647758s of 15.710140228s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:44.217092+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:45.217200+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003258 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:46.217366+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:47.217528+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:48.217679+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:49.217853+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:50.217991+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002667 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:51.218182+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:52.218387+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:53.218546+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:54.218747+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2a9703c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:55.218942+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002535 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:56.219195+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:57.219336+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:58.219450+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:00:59.219661+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:00.219794+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002535 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:01.219922+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:02.220055+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:03.220319+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:04.220451+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 2670592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.336950302s of 21.398941040s, submitted: 3
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:05.220585+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002667 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:06.220810+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:07.220990+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:08.221131+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:09.221307+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:10.221509+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004179 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:11.221954+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:12.222151+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:13.222323+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:14.222466+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:15.222618+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:16.222788+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:17.222945+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:18.223130+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:19.223263+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:20.223386+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.527006149s of 15.556138039s, submitted: 4
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:21.223545+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:22.223723+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:23.223857+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:24.224009+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:25.224182+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:26.224443+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:27.224578+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:28.224801+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:29.224976+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:30.225124+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:31.225294+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2cadd2c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d961c20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:32.225506+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:33.225747+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:34.225884+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:35.227111+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:36.227917+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:37.229726+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:38.230792+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:39.231280+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:40.231468+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:41.231635+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:42.233028+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.097640991s of 22.100765228s, submitted: 1
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:43.233513+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:44.233664+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:45.234224+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:46.235243+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:47.235389+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:48.235522+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:49.235656+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:50.235796+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:51.236377+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:52.236520+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:53.236655+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:54.236832+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.092028618s of 12.143527031s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:55.237089+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003918 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:56.237256+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:57.237374+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:58.237510+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:01:59.237680+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:00.237939+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:01.238139+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:02.238296+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:03.238544+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:04.238695+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:05.238841+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:06.239104+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:07.239284+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:08.239456+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:09.239618+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:10.239762+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:11.239913+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:12.240071+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:13.240222+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2a9550e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d82fe00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:14.240362+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:15.240534+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:16.240723+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:17.240865+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:18.240997+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: mgrc ms_handle_reset ms_handle_reset con 0x559f2abaa000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3802415056
Oct 08 10:30:42 compute-0 ceph-osd[81751]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3802415056,v1:192.168.122.100:6801/3802415056]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: get_auth_request con 0x559f2d0e8c00 auth_method 0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: mgrc handle_mgr_configure stats_period=5
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:19.241152+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:20.241286+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:21.241447+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:22.241636+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:23.241789+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:24.241937+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.943304062s of 30.003890991s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:25.242086+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003918 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:26.242237+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:27.242399+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:28.242541+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:29.242662+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:30.242780+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:31.242897+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:32.243067+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:33.243203+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:34.243333+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27550 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:35.243472+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:36.243651+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:37.243785+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:38.243978+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:39.244119+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:40.244247+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:41.244387+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:42.244521+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.837564468s of 17.844263077s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82e1e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82ef00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:43.245193+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:44.245580+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:45.245771+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005298 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:46.245937+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:47.246600+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:48.247140+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:49.247393+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:50.247526+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:51.247714+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005298 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:52.247914+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:53.248051+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.888109207s of 10.891509056s, submitted: 1
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:54.248185+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:55.248356+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:56.248708+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:57.248979+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 1540096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:58.249105+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:02:59.249410+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:00.249624+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:01.249793+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006942 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:02.249930+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:03.250084+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:04.250234+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:05.250370+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.085538864s of 12.127921104s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:06.250619+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006351 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:07.250824+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:08.250991+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:09.251183+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:10.251358+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:11.251481+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:12.251683+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:13.252306+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:14.252473+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:15.252641+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ef2c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2c5c8b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:16.252897+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:17.253236+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:18.253496+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:19.253625+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:20.253780+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:21.254091+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:22.254264+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:23.254406+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:24.254554+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:25.254667+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:26.254802+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.218805313s of 21.227340698s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:27.255014+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:28.255286+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:29.255519+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:30.255660+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:31.255806+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006351 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:32.255901+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:33.256043+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:34.256215+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:35.256366+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:36.256544+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007863 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:37.256712+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:38.256896+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.160308838s of 12.177426338s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:39.257007+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:40.257073+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:41.257183+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007272 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:42.257442+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:43.257567+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:44.257707+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:45.257870+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:46.258103+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:47.258220+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:48.258376+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2400 session 0x559f2da1f0e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2a8670e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:49.258509+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:50.258628+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:51.258785+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:52.258930+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:53.259098+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:54.259236+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:55.259574+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:56.259793+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:57.259963+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:58.260112+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:03:59.260248+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.320930481s of 20.461774826s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:00.260536+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:01.260743+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007272 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9009 writes, 35K keys, 9009 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9009 writes, 1887 syncs, 4.77 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 764 writes, 1222 keys, 764 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 764 writes, 362 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:02.260872+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:03.261109+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:04.261305+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:05.261586+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:06.261795+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008784 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:07.262000+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:08.262176+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:09.262365+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.007425308s of 10.107902527s, submitted: 3
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:10.262591+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:11.262791+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009705 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:12.263006+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:13.263191+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:14.263344+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:15.263508+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:16.263728+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:17.263939+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:18.264128+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:19.264310+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:20.264530+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:21.264755+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:22.264938+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:23.265151+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:24.265361+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:25.265488+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:26.265681+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:27.265859+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:28.266079+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:29.266221+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:30.266422+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:31.266575+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:32.266701+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:33.266864+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:34.267116+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:35.267284+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:36.267482+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:37.267790+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:38.268000+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:39.268224+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:40.268433+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:41.268702+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:42.268903+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:43.269195+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d70cb40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5ee1e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:44.269384+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:45.269630+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:46.269839+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:47.270160+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:48.270382+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:49.270561+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:50.270722+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:51.270840+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:52.270954+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:53.271096+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:54.271243+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.391696930s of 45.435684204s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:55.271377+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:56.271529+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009705 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:57.271707+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:58.271840+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:04:59.272120+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:00.272254+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:01.272431+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011217 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 1449984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:02.272570+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:03.272729+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:04.272866+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:05.272995+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:06.273172+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:07.273334+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:08.273534+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.340482712s of 13.399305344s, submitted: 4
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:09.273715+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:10.273889+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:11.274081+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:12.274219+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:13.274322+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:14.274445+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:15.274614+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:16.274793+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:17.274971+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:18.275216+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:19.275426+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:20.275903+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:21.276059+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:22.276215+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.399309158s of 14.402190208s, submitted: 1
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:23.276339+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86638592 unmapped: 1400832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:24.276494+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,4])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:25.276658+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,1,2])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86614016 unmapped: 1425408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:26.276838+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009975 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 2293760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:27.276895+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:28.277019+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:29.277171+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:30.277353+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:31.277503+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:32.277621+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:33.277786+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:34.277948+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:35.278149+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:36.278282+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:37.278435+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:38.278584+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:39.278726+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:40.278893+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:41.279074+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:42.279309+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:43.279420+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:44.279550+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:45.279688+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:46.279844+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:47.280026+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:48.280280+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:49.280470+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:50.280627+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:51.280862+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:52.281007+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:53.281133+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:54.281347+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:55.281508+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:56.281680+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:57.281808+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:58.281928+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:05:59.282091+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:00.282240+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 08 10:30:42 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046998603' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:01.282453+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:02.282567+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:03.282730+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:04.282878+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:05.283151+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:06.283402+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:07.283579+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:08.283736+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:09.283897+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2dc09680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d9612c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:10.284156+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:11.284348+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:12.284532+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:13.284739+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:14.284989+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:15.285141+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:16.285319+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:17.285516+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2c8afe00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d953a40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:18.285698+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:19.285899+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.254104614s of 57.032154083s, submitted: 332
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:20.286138+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:21.286306+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:22.286458+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:23.286603+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:24.286715+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:25.286839+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:26.287100+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:27.287239+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:28.287448+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:29.287662+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:30.287842+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:31.288098+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010167 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:32.288221+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:33.288408+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:34.288547+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:35.288770+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:36.288964+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.976808548s of 16.986804962s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:37.289144+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:38.289345+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d70de00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2d554960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:39.289514+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:40.289698+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:41.290126+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:42.290298+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:43.290519+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:44.290774+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:45.290888+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:46.291087+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:47.291316+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:48.291505+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:49.291617+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.073468208s of 12.254982948s, submitted: 3
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:50.291764+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:51.291945+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:52.292105+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:53.292288+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:54.292471+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:55.292646+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:56.292825+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:57.293004+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:58.293169+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:06:59.293317+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:00.293483+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:01.293669+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:02.294069+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:03.294219+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:04.294351+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.275589943s of 15.385351181s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:05.294483+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:06.294636+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:07.295115+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:08.295399+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:09.296338+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:10.296585+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:11.296755+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:12.296917+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:13.297075+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:14.297216+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:15.297372+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:16.297677+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:17.297864+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:18.298022+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:19.298271+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:20.298445+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:21.298582+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:22.298823+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:23.299067+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:24.299241+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:25.299491+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:26.299709+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:27.299901+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:28.300097+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:29.300237+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:30.300370+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:31.300844+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:32.301336+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:33.301790+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:34.302082+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d82e960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d82ef00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:35.302454+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:36.302849+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:37.303216+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:38.303509+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:39.303836+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:40.304132+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:41.304429+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:42.304617+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:43.304787+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:44.305003+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.478878021s of 40.551963806s, submitted: 1
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:45.305308+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:46.305519+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:47.305749+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:48.305998+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:49.306229+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:50.306390+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 2195456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:51.306667+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:52.306796+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013059 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:53.306960+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:54.307111+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:55.307280+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:56.307517+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:57.307721+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.107625008s of 12.130958557s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012468 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:58.307935+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:07:59.308139+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:00.308322+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:01.308518+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:02.308669+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:03.309160+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:04.309381+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:05.309817+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:06.310589+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:07.311773+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:08.311941+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:09.312083+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:10.314258+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:11.314556+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:12.314868+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:13.315140+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:14.315308+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:15.315449+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:16.315691+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:17.315824+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:18.316071+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:19.316262+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:20.316462+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.331020355s of 23.338811874s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 2179072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:21.316598+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 2154496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:22.316819+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021781 data_alloc: 218103808 data_used: 167936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 151 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x107e4e/0x1c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 2146304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d952960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:23.317019+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 2146304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:24.317142+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d5ee1e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5ef2c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:25.317350+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 152 ms_handle_reset con 0x559f2d680c00 session 0x559f2d555680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:26.317512+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fbe3e000/0x0/0x4ffc00000, data 0x90c0a4/0x9ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:27.317706+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083662 data_alloc: 218103808 data_used: 176128
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:28.317837+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:29.318081+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:30.318238+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3a000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:31.318429+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:32.318605+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087260 data_alloc: 218103808 data_used: 176128
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:33.318764+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:34.318927+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3a000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.067012787s of 14.482573509s, submitted: 64
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:35.319225+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:36.319453+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:37.319597+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087392 data_alloc: 218103808 data_used: 176128
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:38.319667+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:39.319790+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:40.319892+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:41.320014+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:42.320102+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089576 data_alloc: 218103808 data_used: 176128
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:43.320277+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:44.320426+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:45.320556+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:46.320709+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.071710587s of 12.114167213s, submitted: 3
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:47.320837+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088985 data_alloc: 218103808 data_used: 176128
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:48.321028+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:49.321213+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:50.321345+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:51.321492+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:52.321643+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:53.321760+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:54.321861+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:55.322014+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:56.322181+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:57.322342+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:58.322459+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:08:59.322611+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:00.322774+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:01.322895+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:02.323071+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:03.323218+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d6370e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2400 session 0x559f2dbe8b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c5dfc20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:04.323361+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:05.323503+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2800 session 0x559f2a866000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a975680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:06.323642+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2000 session 0x559f2a95bc20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:07.323787+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.177728653s of 20.183889389s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092771 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:08.323939+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2400 session 0x559f2b6512c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9a3e00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2d960780
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d554780
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:09.324119+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:10.324251+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb528000/0x0/0x4ffc00000, data 0x121c314/0x12e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:11.324405+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:12.324521+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2400 session 0x559f2dbe81e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165866 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 18571264 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:13.324675+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 18554880 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:14.324811+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _renew_subs
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb528000/0x0/0x4ffc00000, data 0x121c337/0x12e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 10215424 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:15.325006+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 8683520 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:16.325214+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 8683520 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:17.325349+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233556 data_alloc: 234881024 data_used: 9666560
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 8667136 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:18.325469+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb524000/0x0/0x4ffc00000, data 0x121e309/0x12e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:19.325583+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:20.325703+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:21.325825+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:22.325971+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233556 data_alloc: 234881024 data_used: 9666560
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb524000/0x0/0x4ffc00000, data 0x121e309/0x12e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:23.326138+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:24.326244+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.378948212s of 17.598480225s, submitted: 58
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:25.326361+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103514112 unmapped: 8765440 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:26.326517+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 9936896 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:27.326626+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346036 data_alloc: 234881024 data_used: 10461184
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 9936896 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:28.326783+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 9781248 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:29.326922+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:30.327094+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e8400 session 0x559f2da1f0e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d555e00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:31.327259+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:32.327381+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346036 data_alloc: 234881024 data_used: 10461184
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:33.327503+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:34.327722+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:35.327897+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:36.328091+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:37.328227+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346948 data_alloc: 234881024 data_used: 10530816
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:38.328414+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:39.328554+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:40.328735+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e8400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.637916565s of 16.192432404s, submitted: 74
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:41.328919+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:42.329110+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347080 data_alloc: 234881024 data_used: 10530816
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:43.329256+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2a9543c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d953a40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d952960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d82e960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d554b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a954b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:44.329381+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d82fe00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2d82ef00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 9674752 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:45.329532+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d554960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2dbe90e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d6370e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c5fc960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a9a3e00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:46.329706+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9409000/0x0/0x4ffc00000, data 0x2199319/0x2263000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:47.329853+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82e000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367280 data_alloc: 234881024 data_used: 10534912
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:48.329999+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:49.330117+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c424000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:50.330263+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2cbf7680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2d70d2c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102727680 unmapped: 9551872 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:51.330386+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102727680 unmapped: 9551872 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:52.330529+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378901 data_alloc: 234881024 data_used: 11943936
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:53.330701+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:54.330828+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.715806007s of 13.765681267s, submitted: 16
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:55.330977+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:56.331211+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103841792 unmapped: 8437760 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:57.331355+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378853 data_alloc: 234881024 data_used: 11948032
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 8404992 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:58.331502+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 8404992 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:09:59.331630+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:00.331774+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:01.331898+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:02.332133+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379774 data_alloc: 234881024 data_used: 11948032
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:03.332286+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:04.332447+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.078499794s of 10.066446304s, submitted: 47
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 3858432 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:05.332594+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 3768320 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:06.332751+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:07.332941+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423758 data_alloc: 234881024 data_used: 13017088
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:08.333078+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:09.333234+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e52000/0x0/0x4ffc00000, data 0x274f33c/0x281a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e52000/0x0/0x4ffc00000, data 0x274f33c/0x281a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:10.333404+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:11.333583+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:12.333756+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422330 data_alloc: 234881024 data_used: 13017088
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 3956736 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:13.334147+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 3956736 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:14.334437+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e31000/0x0/0x4ffc00000, data 0x277033c/0x283b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a974000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.945456505s of 10.029915810s, submitted: 20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 5545984 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:15.334737+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d8d0960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:16.335011+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:17.335272+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352840 data_alloc: 234881024 data_used: 10534912
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:18.335496+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:19.335707+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:20.335956+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:21.336145+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:22.336311+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352840 data_alloc: 234881024 data_used: 10534912
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:23.336522+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:24.336725+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2d555c20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c36be00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ef4a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:25.336851+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 11517952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:26.337055+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:27.337180+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:28.337341+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:29.337545+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:30.337752+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:31.337995+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:32.338160+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:33.338299+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:34.338459+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:35.338608+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:36.338848+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:37.339076+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:38.339300+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:39.339504+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:40.339735+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:41.339922+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:42.340161+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:43.340298+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:44.340416+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:45.340550+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:46.340775+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:47.340898+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:48.341089+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:49.341233+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:50.341373+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2c8b0b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c8b03c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2c8b0d20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:51.341502+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2b2d8b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2b2d8000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:52.341634+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a999e00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.161369324s of 37.341365814s, submitted: 63
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2a9983c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a996b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3400 session 0x559f2a9974a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3400 session 0x559f2a958960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37f400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a9583c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198144 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:53.341767+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:54.341919+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa225000/0x0/0x4ffc00000, data 0x1380284/0x1447000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:55.342115+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a9703c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:56.342271+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:57.342408+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d5ef4a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198144 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:58.342565+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ee5a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:10:59.342706+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ee1e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 26755072 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:00.342888+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 26755072 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2b2d92c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:01.343016+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 26722304 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:02.343178+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272627 data_alloc: 234881024 data_used: 10821632
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:03.343314+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:04.343460+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:05.343610+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:06.343784+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:07.343939+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272627 data_alloc: 234881024 data_used: 10821632
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:08.344073+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:09.344227+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:10.344365+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:11.344550+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.155124664s of 19.309776306s, submitted: 20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:12.344711+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 19611648 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299469 data_alloc: 234881024 data_used: 11239424
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:13.344840+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 17702912 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:14.344977+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 17702912 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x16ed294/0x17b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:15.345132+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:16.345442+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:17.345666+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:18.345896+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311047 data_alloc: 234881024 data_used: 11096064
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:19.346087+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 18505728 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:20.346288+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 18505728 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:21.346445+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:22.346628+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:23.346814+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303199 data_alloc: 234881024 data_used: 11096064
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:24.347273+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.776124001s of 13.241639137s, submitted: 70
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:25.347413+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:26.347616+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18448384 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:27.347883+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18448384 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:28.348234+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303147 data_alloc: 234881024 data_used: 11096064
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:29.348626+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:30.348816+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:31.348957+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 18432000 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:32.349122+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:33.349240+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303235 data_alloc: 234881024 data_used: 11096064
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:34.349486+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:35.349608+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:36.349761+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:37.349889+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.900504112s of 12.918242455s, submitted: 5
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:38.350023+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304083 data_alloc: 234881024 data_used: 11104256
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:39.350515+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:40.350702+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e82000/0x0/0x4ffc00000, data 0x1722294/0x17ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c5c8f00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2cc785a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:41.350862+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a996000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:42.351320+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:43.351598+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:44.351761+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:45.351883+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:46.352052+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:47.352197+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:48.352354+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:49.352532+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:50.352697+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:51.352797+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:52.352923+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:53.353085+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:54.353210+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:55.353369+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:56.353656+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:57.353827+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:58.353990+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:11:59.354151+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:00.354268+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:01.354469+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:02.354610+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:03.354779+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:04.354921+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:05.355104+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:06.355244+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:07.355386+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:08.355666+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:09.355849+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:10.355980+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:11.356136+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2da1f860
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d636f00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b2400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d5ee3c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2cc5ed20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.925148010s of 34.002922058s, submitted: 29
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:12.356266+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9925a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82e3c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fe00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2c5c9860
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2b2d8000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:13.356453+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193305 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa497000/0x0/0x4ffc00000, data 0x110d2e6/0x11d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:14.356637+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:15.356832+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:16.357076+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:17.357453+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:18.357604+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195599 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2a958960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101908480 unmapped: 25067520 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa497000/0x0/0x4ffc00000, data 0x110d2e6/0x11d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:19.357752+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101916672 unmapped: 25059328 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:20.357955+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103686144 unmapped: 23289856 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:21.359648+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:22.360427+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:23.361727+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244379 data_alloc: 218103808 data_used: 7331840
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:24.363002+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa496000/0x0/0x4ffc00000, data 0x110d309/0x11d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:25.364001+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:26.364564+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:27.365347+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa496000/0x0/0x4ffc00000, data 0x110d309/0x11d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:28.366159+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244379 data_alloc: 218103808 data_used: 7331840
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:29.366707+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:30.367349+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.476533890s of 18.995376587s, submitted: 43
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 20316160 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:31.367735+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa178000/0x0/0x4ffc00000, data 0x142b309/0x14f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 18898944 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:32.368258+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa100000/0x0/0x4ffc00000, data 0x14a3309/0x156c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 17604608 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa100000/0x0/0x4ffc00000, data 0x14a3309/0x156c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:33.368615+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:34.368888+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:35.369184+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:36.369559+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:37.369731+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:38.370000+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:39.370141+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:40.370323+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:41.370468+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:42.370616+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:43.370764+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:44.370934+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:45.371223+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:46.371551+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:47.371708+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:48.371977+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:49.372269+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dbe81e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2c424b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c800 session 0x559f2c5df860
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76cc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76cc00 session 0x559f2c8b1e00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76cc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.359991074s of 18.809175491s, submitted: 62
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76cc00 session 0x559f2c8b1c20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9a2960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2d5ee000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2d5ee780
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c800 session 0x559f2d5eeb40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:50.372480+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:51.372637+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:52.372848+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:53.373012+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344055 data_alloc: 218103808 data_used: 8523776
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:54.373273+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:55.373414+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c5da1e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:56.373616+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 22773760 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:57.373766+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16949248 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:58.373950+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403516 data_alloc: 234881024 data_used: 15618048
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16949248 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:12:59.374143+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:00.374359+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:01.374504+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:02.374722+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:03.374891+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.809376717s of 14.030103683s, submitted: 19
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403852 data_alloc: 234881024 data_used: 15618048
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:04.375067+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:05.375239+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:06.375449+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:07.375621+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 115367936 unmapped: 15810560 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:08.375810+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431280 data_alloc: 234881024 data_used: 16175104
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13950976 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:09.375992+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 13107200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:10.376144+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:11.376303+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:12.376637+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:13.376847+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439394 data_alloc: 234881024 data_used: 16089088
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:14.377126+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27869 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:15.377279+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:16.377478+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:17.377647+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2a866b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.375069618s of 14.576653481s, submitted: 66
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2d8d0000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:18.377863+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286810 data_alloc: 218103808 data_used: 6938624
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2dbe94a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:19.378072+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:20.378321+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e8400 session 0x559f2c8ae780
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2a997c20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:21.378600+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f2000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:22.378856+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d9605a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c8b1a40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:23.379152+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150044 data_alloc: 218103808 data_used: 184320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2b6512c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:24.379361+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:25.379582+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:26.380238+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:27.380736+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:28.381546+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148764 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:29.381804+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:30.382107+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:31.382438+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.605167389s of 13.440299034s, submitted: 69
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:32.383793+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:33.384479+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148896 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:34.384827+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:35.385186+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:36.385561+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:37.385730+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76c400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:38.386265+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151336 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:39.386559+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:40.386966+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:41.387363+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:42.387627+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:43.388139+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151336 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:44.388460+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:45.388612+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.465369225s of 14.476176262s, submitted: 3
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:46.388853+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:47.388994+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:48.389188+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151204 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:49.389444+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:50.389673+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:51.389827+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:52.390122+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:53.390424+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151204 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:54.390616+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2cc5e000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76dc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2d82e780
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76dc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2d0534a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c8afc20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d8d0d20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:55.390778+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2cbf7680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:56.391406+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:57.391564+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0xe2a2d6/0xef1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:58.391719+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190883 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:13:59.391925+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:00.392143+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:01.392341+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2d8d05a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2666 syncs, 4.09 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1892 writes, 5856 keys, 1892 commit groups, 1.0 writes per commit group, ingest: 6.53 MB, 0.01 MB/s
                                           Interval WAL: 1892 writes, 779 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0xe2a2d6/0xef1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:02.392516+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2cadd2c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:03.392741+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59a000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59a000 session 0x559f2cc5fc20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59bc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.399578094s of 17.499835968s, submitted: 27
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2d637680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192697 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:04.392928+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24371200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:05.393112+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24371200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:06.393354+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 23044096 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:07.393519+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:08.393721+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228417 data_alloc: 218103808 data_used: 5488640
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:09.393904+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:10.394119+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:11.394270+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 23003136 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:12.394440+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:13.394670+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228417 data_alloc: 218103808 data_used: 5488640
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:14.421431+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:15.421742+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.437581062s of 12.444223404s, submitted: 1
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:16.421907+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21872640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:17.422052+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 21807104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4e0000/0x0/0x4ffc00000, data 0x10be2e6/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:18.422245+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:19.422488+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:20.422708+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:21.422944+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:22.423134+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:23.423297+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:24.423488+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:25.423700+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:26.423984+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:27.424208+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:28.424427+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:29.424569+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:30.424702+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:31.424845+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:32.424985+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:33.425172+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:34.425274+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:35.425415+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:36.425587+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:37.425796+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:38.425943+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:39.426093+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:40.426250+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:41.426408+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:42.426591+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9703c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2c36ba40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59a000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59a000 session 0x559f2c36a1e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:43.426712+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59bc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2cc5ed20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.291732788s of 27.440547943s, submitted: 53
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109658112 unmapped: 21520384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dbe92c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283893 data_alloc: 218103808 data_used: 5914624
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:44.426836+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:45.426988+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:46.427197+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:47.427329+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:48.427706+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284061 data_alloc: 218103808 data_used: 5914624
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:49.427925+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:50.428097+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:51.428231+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:52.428370+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:53.428520+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305797 data_alloc: 218103808 data_used: 9158656
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:54.428715+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:55.428845+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:56.429059+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:57.429247+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:58.429392+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305797 data_alloc: 218103808 data_used: 9158656
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:14:59.429524+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:00.429653+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:01.429833+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.994756699s of 18.046251297s, submitted: 9
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 18489344 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:02.430008+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 16203776 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:03.431572+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392039 data_alloc: 234881024 data_used: 9400320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:04.432448+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:05.432676+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:06.433933+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:07.434254+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:08.435312+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:09.436203+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392055 data_alloc: 234881024 data_used: 9400320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:10.436938+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:11.437344+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:12.437547+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 16400384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:13.437887+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 16400384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:14.438145+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392055 data_alloc: 234881024 data_used: 9400320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.862829208s of 13.072974205s, submitted: 92
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 17727488 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b646c00 session 0x559f2c5fc5a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76cc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:15.438499+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b647c00 session 0x559f2b6505a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:16.438761+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e000 session 0x559f2d953860
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b647c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:17.438943+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:18.439271+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:19.439423+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:20.439609+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:21.439798+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17711104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:22.440020+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17711104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:23.440275+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113483776 unmapped: 17694720 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:24.440405+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381879 data_alloc: 234881024 data_used: 9400320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.620989799s of 10.001231194s, submitted: 134
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 17547264 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:25.440650+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:26.440845+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:27.441147+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:28.441366+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:29.441551+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:30.441732+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:31.441955+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:32.442120+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:33.442325+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:34.442488+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:35.442686+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.531607628s of 10.991118431s, submitted: 201
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:36.442904+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:37.443121+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:38.443315+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:39.443495+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:40.443641+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:41.443806+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:42.443971+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:43.444162+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:44.444342+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:45.444671+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:46.444907+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:47.445108+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 17334272 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:48.445384+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 17334272 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.583388329s of 13.592965126s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:49.445549+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384159 data_alloc: 234881024 data_used: 9388032
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:50.445717+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:51.445905+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:52.446085+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 17203200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:53.446329+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:54.446495+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384663 data_alloc: 234881024 data_used: 9388032
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:55.446638+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:56.446854+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2d637860
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 18366464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e000 session 0x559f2b2d8b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:57.447118+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:58.447299+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:15:59.447443+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259535 data_alloc: 218103808 data_used: 5898240
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37fc00 session 0x559f2d052b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59a000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:00.447601+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:01.447828+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.979496956s of 13.032286644s, submitted: 26
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:02.447970+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:03.448201+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:04.448337+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259703 data_alloc: 218103808 data_used: 5898240
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:05.448482+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:06.448656+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5321e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c6481e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59bc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:07.448787+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2dbe8960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa486000/0x0/0x4ffc00000, data 0x914284/0x9db000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:08.448943+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:09.449088+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:10.449231+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:11.449400+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:12.449552+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:13.449736+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:14.449888+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:15.450082+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:16.450240+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:17.450377+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2da1c3c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e800 session 0x559f2dbe9680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:18.450529+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:19.450659+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:20.450823+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:21.450977+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:22.451193+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:23.451350+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:24.451481+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:25.451681+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:26.451849+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:27.452006+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:28.452183+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2b37e800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.247751236s of 26.306289673s, submitted: 19
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:29.452409+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166234 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:30.452613+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:31.452764+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:32.452937+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:33.453186+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:34.453330+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166234 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c59bc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:35.453547+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:36.453816+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:37.454002+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:38.454190+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:39.454691+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165942 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:40.455158+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:41.455576+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.318322182s of 13.376296997s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 20733952 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c5df4a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:42.455982+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:43.456332+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:44.456605+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:45.456743+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:46.456926+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a999e00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:47.457154+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c8b14a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:48.457383+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f29d55c20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2c5c9860
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:49.457593+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:50.457796+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:51.458008+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:52.458224+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:53.458423+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2cadc000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:54.458598+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.305717468s of 12.769754410s, submitted: 2
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:55.458752+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2cc5e000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:56.458927+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:57.459090+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:58.459255+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:16:59.459406+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:00.459549+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:01.459719+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:02.459918+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:03.460487+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:04.462283+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:05.462444+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:06.462634+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:07.462851+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:08.463070+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.356574059s of 13.715682030s, submitted: 3
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a971a40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dd0ad20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:09.463228+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237939 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:10.463353+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ee3000/0x0/0x4ffc00000, data 0x12b22d6/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:11.463501+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:12.463658+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:13.463806+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76d400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2cc5eb40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30334976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:14.463962+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239823 data_alloc: 218103808 data_used: 184320
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30334976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:15.464101+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ebf000/0x0/0x4ffc00000, data 0x12d62d6/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 30171136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:16.464307+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:17.464521+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:18.464653+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: mgrc ms_handle_reset ms_handle_reset con 0x559f2d0e8c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3802415056
Oct 08 10:30:42 compute-0 ceph-osd[81751]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3802415056,v1:192.168.122.100:6801/3802415056]
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: get_auth_request con 0x559f2b37e000 auth_method 0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: mgrc handle_mgr_configure stats_period=5
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:19.464845+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300015 data_alloc: 218103808 data_used: 9142272
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:20.465003+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ebf000/0x0/0x4ffc00000, data 0x12d62d6/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:21.465159+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c36a3c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.177964211s of 13.560062408s, submitted: 29
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:22.465306+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110714880 unmapped: 27820032 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:23.465465+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c5fc5a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:24.465619+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:25.465747+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:26.465965+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:27.466118+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:28.466329+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:29.466508+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:30.466653+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:31.466814+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:32.466984+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:33.467156+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:34.467330+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:35.467466+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:36.467681+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:37.467829+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:38.468003+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:39.468164+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:40.468324+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:41.468505+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:42.468672+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:43.468898+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:44.469111+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:45.469273+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:46.469454+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:47.469617+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:48.469796+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:49.469975+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:50.470138+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:51.470335+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:52.470675+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:53.470942+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:54.471101+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:55.471250+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:56.471488+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:57.471654+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:58.471926+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:17:59.472082+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.629310608s of 38.173881531s, submitted: 16
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c8b03c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:00.472209+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:01.472349+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:02.473149+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:03.473350+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:04.473590+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205109 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:05.473782+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:06.473982+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2f76dc00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2c8ae780
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d636960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:07.474187+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a866000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a955680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:08.474427+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:09.474566+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205109 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:10.474717+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107323392 unmapped: 31211520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:11.474896+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:12.475075+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:13.475234+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:14.475381+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232013 data_alloc: 218103808 data_used: 4112384
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:15.475518+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:16.475704+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:17.475851+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:18.476135+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:19.476303+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232013 data_alloc: 218103808 data_used: 4112384
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:20.476457+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.716075897s of 20.768712997s, submitted: 10
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:21.476809+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 22183936 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:22.476947+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114319360 unmapped: 24215552 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:23.477117+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bff000/0x0/0x4ffc00000, data 0x158f274/0x1655000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:24.477268+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304949 data_alloc: 218103808 data_used: 5197824
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:25.477435+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:26.477617+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:27.477741+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:28.477906+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:29.478137+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305101 data_alloc: 218103808 data_used: 5201920
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:30.478310+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:31.478489+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:32.478709+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:33.478994+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:34.479170+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9774a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305101 data_alloc: 218103808 data_used: 5201920
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.963165283s of 14.391463280s, submitted: 83
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:35.479322+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d8d10e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:36.479526+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:37.479690+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:38.479910+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:39.480104+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:40.480265+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:41.480412+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:42.480563+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:43.480817+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:44.480967+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:45.481147+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:46.481369+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:47.481564+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:48.481652+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:49.481783+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:50.481946+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:51.482110+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:52.482288+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:53.482402+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:54.482561+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:55.482696+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:56.482876+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:57.483070+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.524868011s of 22.775295258s, submitted: 9
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d053a40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:58.483236+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0xd2b274/0xdf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 27222016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0xd2b274/0xdf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:18:59.483385+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 27222016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251947 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:00.483582+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:01.483770+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:02.484070+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:03.484232+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:04.484505+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251947 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:05.484665+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:06.484891+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82f0e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:07.485048+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:08.485233+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:09.485359+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:10.485526+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314620 data_alloc: 218103808 data_used: 8757248
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:11.485701+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:12.485844+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:13.486092+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:14.486311+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:15.486481+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314620 data_alloc: 218103808 data_used: 8757248
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:16.486647+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:17.486837+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:18.487072+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 25600000 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.691644669s of 20.797815323s, submitted: 21
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9866000/0x0/0x4ffc00000, data 0x1927297/0x19ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2caddc20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:19.487234+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17956864 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:20.487458+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409342 data_alloc: 234881024 data_used: 10747904
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:21.488144+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:22.488313+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:23.488519+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b0c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2a954b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 19750912 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d6ca800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d6ca800 session 0x559f2da1f860
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:24.488766+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe297/0x1c85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2c8b10e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 19734528 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d5efe00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:25.489087+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410709 data_alloc: 234881024 data_used: 10760192
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b0c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118808576 unmapped: 19726336 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:26.489301+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 19537920 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:27.489431+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:28.489602+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:29.489743+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:30.489894+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426209 data_alloc: 234881024 data_used: 12935168
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:31.490092+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:32.490225+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:33.490696+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:34.491094+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:35.491242+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426209 data_alloc: 234881024 data_used: 12935168
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:36.491463+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 17989632 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:37.491719+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.309732437s of 18.578636169s, submitted: 92
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 14458880 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:38.491855+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e3f000/0x0/0x4ffc00000, data 0x234f2a7/0x2417000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 13942784 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:39.492194+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:40.492517+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1498987 data_alloc: 234881024 data_used: 13889536
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:41.492911+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:42.493221+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:43.493539+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:44.493777+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:45.493934+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1498987 data_alloc: 234881024 data_used: 13889536
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:46.494206+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:47.494409+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:48.494662+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:49.494874+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:50.495193+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1500203 data_alloc: 234881024 data_used: 13967360
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.244200706s of 13.420284271s, submitted: 77
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:51.495470+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:52.495634+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2a976b40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5321e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d608400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e2c000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2d8d0d20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:53.495851+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:54.496251+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:55.496519+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387580 data_alloc: 234881024 data_used: 10768384
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:56.496797+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2b6505a0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d8d0f00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:57.497023+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d608400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2c6481e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98fd000/0x0/0x4ffc00000, data 0x1898297/0x195f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:58.497361+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:19:59.497528+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:00.497655+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:01.497829+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:02.497981+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:03.498439+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:04.498648+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:05.498908+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:06.499103+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:07.499348+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:08.499731+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:09.499870+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:10.500151+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:11.500446+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:12.500630+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:13.500861+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:14.501118+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:15.501283+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:16.501472+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:17.501662+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:18.501808+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2dab5680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d0e9800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2b2d90e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d636780
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5eeb40
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:19.501941+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.304420471s of 28.535713196s, submitted: 47
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a958f00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d608400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2a9990e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b0c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2cbf61e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d8d1c20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a9961e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:20.502091+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:21.502222+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:22.502381+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:23.502580+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d9530e0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:24.502744+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d608400
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2d952960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:25.502932+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d680c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d953680
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2c8c1c00
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d952d20
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:26.503166+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d3c4800
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:27.503422+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:28.503587+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:29.503749+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:30.503902+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219133 data_alloc: 218103808 data_used: 704512
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:31.504088+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:32.504447+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:33.504587+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:34.504738+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:35.504899+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219133 data_alloc: 218103808 data_used: 704512
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 21397504 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:36.505086+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 21397504 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:37.505210+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.240032196s of 18.298688889s, submitted: 18
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 19447808 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:38.506160+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x1082284/0x1149000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:39.506344+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:40.506533+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266339 data_alloc: 218103808 data_used: 815104
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:41.506792+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:42.506991+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:43.507226+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:44.507421+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:45.507636+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266339 data_alloc: 218103808 data_used: 815104
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:46.507841+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:47.508121+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:48.508324+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:49.508520+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.515064240s of 12.640249252s, submitted: 32
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:50.508667+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5ee960
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d5b3000
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265651 data_alloc: 218103808 data_used: 815104
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d6372c0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:51.510302+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:52.510495+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:53.510710+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:54.510897+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:55.511066+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:56.511280+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:57.511738+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:58.512670+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:20:59.513111+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:00.513612+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:01.513818+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:02.514130+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:03.514275+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:04.514486+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:05.514660+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:06.515179+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:07.515294+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:08.515947+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:09.516507+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:10.517091+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:11.517884+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:12.518352+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:13.518538+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:14.518764+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:15.518898+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:16.519299+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:17.519521+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:18.519887+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:19.520334+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:20.520601+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:21.520756+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:22.520954+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:23.521087+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:24.521531+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:25.521665+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:26.522025+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:27.522249+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:28.522497+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:29.522652+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:30.522770+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:31.522904+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:32.523149+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:33.523328+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:34.523480+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:35.523605+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:36.523729+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:37.523860+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:38.524003+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:39.524078+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:40.524207+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:41.524362+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:42.524469+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:43.524632+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:44.524785+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:45.524976+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:46.525189+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:47.525327+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:48.525512+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:49.525636+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:50.525811+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:51.525941+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:52.526089+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:53.526246+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:54.526407+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:55.526539+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:56.526730+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:57.526860+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:58.527001+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:21:59.527144+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:00.527272+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:01.527399+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:02.527555+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 20701184 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}'
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'config show' '{prefix=config show}'
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}'
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}'
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:03.527692+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 21078016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:04.527828+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 20652032 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'log dump' '{prefix=log dump}'
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:05.527966+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 32145408 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'perf dump' '{prefix=perf dump}'
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'perf schema' '{prefix=perf schema}'
Oct 08 10:30:42 compute-0 ceph-osd[81751]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:06.528130+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:07.528277+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:08.528422+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:09.528587+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:10.528727+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:11.528856+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:12.529010+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:13.529172+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:14.529361+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:15.529489+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:16.529648+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:17.529824+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:18.529971+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:19.530079+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:20.530199+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:21.530336+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:22.530466+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:23.530595+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:24.530721+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:25.530857+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:26.531016+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:27.531090+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:28.531214+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:29.531363+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:30.532114+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:31.532239+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:32.532361+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:33.532527+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:34.532673+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:35.532818+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:36.532992+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:37.533097+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:38.533268+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:39.533408+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:40.533562+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:41.533736+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:42.533903+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:43.534100+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:44.534314+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:45.534486+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:46.534718+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:47.535022+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:48.535215+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:49.535364+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:50.535511+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:51.535690+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:52.535810+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:53.535994+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:54.536104+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:55.536270+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:56.536481+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:57.536674+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:58.536909+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:22:59.537066+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:00.537240+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:01.537437+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:02.537721+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:03.537850+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:04.538009+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:05.538241+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:06.538441+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:07.538577+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:08.538802+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:09.538955+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:10.539096+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:11.539231+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:12.539324+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:13.539463+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:14.539598+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:15.539741+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:16.539914+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:17.540066+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:18.540211+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:19.540364+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:20.540501+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:21.540625+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:22.540755+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:23.540904+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:24.541040+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:25.541150+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:26.541277+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:27.541441+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:28.541779+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:29.542002+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:30.542096+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:31.542258+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:32.542385+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:33.542515+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:34.542680+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:35.542824+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:36.543022+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:37.543216+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:38.543372+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:39.543505+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:40.543624+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:41.543712+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:42.543874+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:43.543997+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:44.544172+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:45.544321+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:46.544496+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:47.544666+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:48.544834+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:49.544973+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:50.545094+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:51.545267+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:52.545398+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:53.545532+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:54.545695+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:55.545813+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:56.545979+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:57.546103+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:58.546188+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:23:59.546348+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:00.546471+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:01.546577+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3376 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1713 writes, 5631 keys, 1713 commit groups, 1.0 writes per commit group, ingest: 6.95 MB, 0.01 MB/s
                                           Interval WAL: 1713 writes, 710 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:02.546726+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:03.546881+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:04.547028+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:05.547202+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:06.547407+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:07.547578+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:08.547706+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:09.547829+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:10.547970+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:11.548824+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:12.549555+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:13.551590+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:14.551994+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:15.552175+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:16.552473+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:17.552740+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:18.553115+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:19.553476+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:20.553622+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:21.554158+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:22.554684+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:23.554893+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:24.555113+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:25.555282+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:26.555474+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:27.555723+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:28.555870+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:29.556120+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:30.556321+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:31.556516+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:32.556862+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:33.556989+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:34.557129+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:35.557308+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:36.557475+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:37.557633+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:38.557838+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:39.558090+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:40.558258+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:41.558410+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:42.558588+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:43.558745+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:44.558978+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:45.559196+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:46.559423+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:47.559587+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:48.559769+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:49.559896+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:50.559979+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:51.560135+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:52.560223+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:53.560353+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:54.560480+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:55.560618+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:56.560793+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:57.560927+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:58.561119+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:24:59.561296+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:00.561417+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:01.561557+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:02.561689+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:03.561832+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:04.561939+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:05.562064+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:06.562225+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:07.562368+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:08.562545+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:09.562642+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:10.562794+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:11.562942+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:12.563114+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:13.563358+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:14.563475+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:15.563671+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:16.563856+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:17.564004+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:18.564184+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:19.564348+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:20.564532+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:21.564665+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 32268288 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:22.564786+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 32268288 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 272.678894043s of 272.762756348s, submitted: 25
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:23.564929+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 32251904 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,1])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:24.565078+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 32251904 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:25.565186+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 32112640 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:26.565383+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:27.565526+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:28.565663+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:29.565798+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:30.565971+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:31.566129+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:32.566303+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:33.566457+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:34.566597+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:35.566753+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:36.566941+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:37.567118+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:38.567309+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:39.567523+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:40.567660+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:41.567833+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:42.567984+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:43.568171+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:44.568320+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:45.568492+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:46.568696+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:47.569338+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:48.569902+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:49.570684+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:50.571404+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:51.572000+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:52.572490+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:53.572727+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:54.572946+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:55.573144+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:56.573414+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:57.573551+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:58.573714+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:25:59.573915+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:00.574187+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:01.574436+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:02.574647+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:03.574810+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:04.575164+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:05.575320+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:06.575575+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:07.575695+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:08.575851+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:09.576059+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:10.576189+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:11.576378+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:12.576521+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:13.576714+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:14.576895+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:15.577068+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:16.577281+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:17.577484+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:18.577636+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:19.577818+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:20.578777+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:21.579379+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:22.579954+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:23.580323+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:24.581114+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:25.581845+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:26.582339+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:27.582505+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:28.582982+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:29.583164+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:30.583435+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:31.584323+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:32.585093+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:33.585480+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:34.585840+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:35.586022+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:36.586643+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:37.586959+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:38.587113+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:39.587443+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:40.587799+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:41.587991+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:42.588242+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:43.588565+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:44.588813+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:45.589099+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:46.589404+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:47.589629+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:48.589852+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:49.590114+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:50.590359+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:51.590541+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:52.590709+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:53.590837+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:54.590972+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:55.591116+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:56.591277+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:57.591632+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:58.591786+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:26:59.591947+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:00.592055+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:01.592216+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:02.592435+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:03.592624+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:04.592713+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:05.592842+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:06.593138+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:07.593355+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:08.593494+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:09.593652+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:10.593803+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:11.593946+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:12.594115+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:13.594268+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:14.594360+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:15.594478+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:16.594613+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:17.594783+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:18.594933+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:19.595059+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:20.595206+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:21.595371+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:22.595517+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:23.595659+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:24.595819+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:25.595944+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:26.596160+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:27.596329+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:28.596509+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:29.596634+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:30.596804+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:31.596953+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:32.597120+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:33.597291+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:34.597456+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:35.597614+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:36.597808+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:37.597991+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:38.598134+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:39.598340+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:40.598526+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:41.598674+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:42.598855+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:43.599010+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:44.599194+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:45.599324+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:46.599493+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:47.599615+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:48.599728+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:49.599853+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:50.599984+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:51.600166+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:52.600309+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:53.600451+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:54.600578+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:55.600693+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:56.600849+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:57.601005+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:42 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:42 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:42 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:58.601892+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:27:59.602329+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:42 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:00.603148+0000)
Oct 08 10:30:42 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:01.603390+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:02.604139+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:03.604417+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:04.604663+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:05.604911+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:06.605496+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:07.606008+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:08.606349+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:09.606573+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:10.606961+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:11.607187+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:12.607410+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:13.607605+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:14.607866+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:15.608098+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:16.608380+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:17.608600+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:18.608762+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:19.608974+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:20.609129+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:21.609275+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:22.609440+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:23.609591+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:24.609798+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:25.609976+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:26.610204+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:27.610336+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:28.610492+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:29.610638+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:30.610786+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:31.610944+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:32.611102+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:33.611317+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:34.611464+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:35.611662+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:36.611964+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:37.612117+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2cadcf00
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: handle_auth_request added challenge on 0x559f2d608400
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:38.612288+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:39.612439+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:40.612574+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:41.612765+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:42.612912+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:43.613094+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:44.613276+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:45.613448+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:46.613628+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:47.613913+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:48.614102+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:49.614312+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:50.614467+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:51.614671+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:52.614829+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:53.615075+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:54.615199+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:55.615358+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:56.615572+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:57.615771+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:58.615926+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:28:59.616092+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:00.616231+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:01.616358+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets getting new tickets!
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:02.616578+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _finish_auth 0
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:02.617556+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:03.616699+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:04.616834+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:05.616966+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:06.617134+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:07.617303+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:08.617475+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:09.617597+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:10.617719+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:11.617848+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:12.618010+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:13.618155+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:14.618282+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:15.618428+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:16.618612+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:17.618738+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:18.618868+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:19.619011+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:20.619166+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:21.619330+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:22.619476+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:23.619648+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:24.619797+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:25.619973+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:26.620189+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:27.620339+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:28.620611+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:29.620819+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:30.621022+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:31.621293+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:32.621459+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:33.621635+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:34.621785+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:35.621959+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:36.622863+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:37.623082+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:38.623211+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:39.623395+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:40.623553+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:41.623770+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:42.624622+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:43.624780+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:44.624940+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:45.625114+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:46.625264+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:47.625436+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:48.625581+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:49.625721+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:50.626059+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:51.626194+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:52.626326+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:53.626456+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:54.626608+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:55.626765+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:56.627079+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:57.627343+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:58.627494+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:29:59.627612+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:00.627762+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:01.627932+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:02.628091+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:03.628228+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:04.628371+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:05.628500+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:06.628689+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:07.628825+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 08 10:30:43 compute-0 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 08 10:30:43 compute-0 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:08.628994+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:09.629140+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 29704192 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}'
Oct 08 10:30:43 compute-0 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 08 10:30:43 compute-0 ceph-osd[81751]: do_command 'config show' '{prefix=config show}'
Oct 08 10:30:43 compute-0 ceph-osd[81751]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 08 10:30:43 compute-0 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}'
Oct 08 10:30:43 compute-0 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 08 10:30:43 compute-0 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}'
Oct 08 10:30:43 compute-0 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:10.629277+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119857152 unmapped: 29720576 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:11.629422+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120184832 unmapped: 29392896 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: tick
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_tickets
Oct 08 10:30:43 compute-0 ceph-osd[81751]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-08T10:30:12.629686+0000)
Oct 08 10:30:43 compute-0 ceph-osd[81751]: do_command 'log dump' '{prefix=log dump}'
Oct 08 10:30:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:43.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17742 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 10:30:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27881 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.17697 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.27836 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.27511 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/371870367' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.17706 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3375777232' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1313256790' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.27535 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.27854 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3770399889' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.17724 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4046998603' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4153275886' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3431903715' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27896 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 08 10:30:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541047330' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27577 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27911 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:43 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 08 10:30:43 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264219819' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:30:43 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:43 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:43 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:43.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:43 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27589 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:44 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27932 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17778 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 08 10:30:44 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329537465' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 08 10:30:44 compute-0 nova_compute[262220]: 2025-10-08 10:30:44.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27938 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.27550 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.27869 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.17742 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.27881 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.27896 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2541047330' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2414699130' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.27577 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.17760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2293664485' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.27911 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1442086797' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/264219819' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1259775984' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/329537465' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27941 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17799 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:44 compute-0 crontab[302878]: (root) LIST (root)
Oct 08 10:30:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27631 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17811 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:44 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27962 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:45.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 08 10:30:45 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305297458' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27652 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17826 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:45 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 08 10:30:45 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27664 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028679134' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17841 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.27589 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.27932 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.17778 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.27938 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.27941 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.17799 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1372152975' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3779101949' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/18107447' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3305297458' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3490532746' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2808389557' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 08 10:30:45 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:30:45 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 08 10:30:45 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:45 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:45 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:45.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 08 10:30:46 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/945813633' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 08 10:30:46 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17853 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 08 10:30:46 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181162596' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 08 10:30:46 compute-0 nova_compute[262220]: 2025-10-08 10:30:46.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 08 10:30:46 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1706432708' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 08 10:30:46 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 08 10:30:46 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286908895' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 08 10:30:46 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27685 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.27631 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.17811 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.27962 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.27652 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.17826 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.27664 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1028679134' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.17841 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2861111884' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/4179906354' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/945813633' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2181162596' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/387154210' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 08 10:30:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:47.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 08 10:30:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1968975931' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:47.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 08 10:30:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:47.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:30:47 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:47.275Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:30:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 08 10:30:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783763333' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 08 10:30:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838008458' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 08 10:30:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616920574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 08 10:30:47 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:47 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:47 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:47.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:30:47
Oct 08 10:30:47 compute-0 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 08 10:30:47 compute-0 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct 08 10:30:47 compute-0 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Oct 08 10:30:47 compute-0 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct 08 10:30:47 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 08 10:30:47 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:30:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:30:47 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:30:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 08 10:30:48 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2984381003' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.17853 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1706432708' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3286908895' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.27685 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1968975931' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3783763333' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/488192470' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1023750385' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4067196754' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/838008458' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/616920574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4187310018' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3856928330' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2650633272' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/668117161' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 08 10:30:48 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753562022' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 08 10:30:48 compute-0 systemd[1]: Starting Hostname Service...
Oct 08 10:30:48 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 08 10:30:48 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351482930' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 08 10:30:48 compute-0 systemd[1]: Started Hostname Service.
Oct 08 10:30:48 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:48.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 08 10:30:48 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17964 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:49 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 08 10:30:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3087641911' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:49.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2984381003' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3753562022' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/958196065' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1011416293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3005227474' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1895567199' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2351482930' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3085769687' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1520689448' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3356810179' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/558658829' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3719698700' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1034452980' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3925859834' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3087641911' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 nova_compute[262220]: 2025-10-08 10:30:49.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:49 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28127 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17982 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 08 10:30:49 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196912211' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28142 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:49 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:49 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:49.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28151 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18003 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:49 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28163 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.17964 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2291908893' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1282438508' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.28127 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.17982 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3196912211' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3630879339' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.28142 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2834249148' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/102008294' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1159049839' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18021 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28184 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 08 10:30:50 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/686497220' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18033 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28208 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27850 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 08 10:30:50 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054226017' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 08 10:30:50 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27856 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:51.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28229 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18045 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27868 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.28151 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.18003 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.28163 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.18021 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.28184 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/686497220' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/2799836876' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/169964451' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/249703252' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.18033 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.28208 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3054226017' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3640013680' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 08 10:30:51 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072704330' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27877 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28244 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18060 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 nova_compute[262220]: 2025-10-08 10:30:51.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27883 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:51 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:51 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:51.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:51 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 08 10:30:51 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621856933' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 08 10:30:51 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28259 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:52 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18084 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27895 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:52 compute-0 sudo[303872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 08 10:30:52 compute-0 sudo[303872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 08 10:30:52 compute-0 sudo[303872]: pam_unix(sudo:session): session closed for user root
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.27850 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.27856 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.28229 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.18045 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.27868 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/4072704330' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.27877 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.28244 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.18060 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.27883 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3792097307' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2621856933' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/907660550' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:52 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18117 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 08 10:30:52 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187066560' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 08 10:30:52 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27952 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:53.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:53 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18147 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28328 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27958 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='client.28259 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='client.18084 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='client.27895 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='client.18117 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4034175075' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='client.27919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1187066560' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1261544946' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1304863816' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 08 10:30:53 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 08 10:30:53 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982089414' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 08 10:30:53 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:53 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:53 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:53.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 08 10:30:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 08 10:30:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 08 10:30:54 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 08 10:30:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 08 10:30:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628765074' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 08 10:30:54 compute-0 nova_compute[262220]: 2025-10-08 10:30:54.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.27952 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.18147 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.28328 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.27958 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/773355315' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3982089414' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2531324752' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/4040404635' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/628765074' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3227494455' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 08 10:30:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 08 10:30:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1457471241' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 08 10:30:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 08 10:30:54 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 08 10:30:54 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676275678' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 08 10:30:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 08 10:30:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:55.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 08 10:30:55 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28015 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:55 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18192 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:55 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:55 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28400 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/1457471241' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 08 10:30:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/648501160' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 08 10:30:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/3868522006' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 08 10:30:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/2676275678' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 08 10:30:55 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/2479549826' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 08 10:30:55 compute-0 sshd-session[304261]: banner exchange: Connection from 216.218.206.68 port 18668: invalid format
Oct 08 10:30:55 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:30:55 compute-0 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 08 10:30:55 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:55 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 08 10:30:55 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:55.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 08 10:30:55 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 08 10:30:55 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3970422219' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 08 10:30:56 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867283654' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 08 10:30:56 compute-0 nova_compute[262220]: 2025-10-08 10:30:56.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 08 10:30:56 compute-0 ceph-mon[73572]: from='client.28015 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mon[73572]: from='client.18192 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mon[73572]: pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 08 10:30:56 compute-0 ceph-mon[73572]: from='client.28400 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/44372680' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3970422219' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/1020541050' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.102:0/1849412696' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.100:0/3867283654' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mon[73572]: from='client.? 192.168.122.101:0/3411176713' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 08 10:30:56 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18216 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:57 compute-0 nova_compute[262220]: 2025-10-08 10:30:56.999 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:57 compute-0 nova_compute[262220]: 2025-10-08 10:30:56.999 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 08 10:30:57 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28045 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 08 10:30:57 compute-0 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct 08 10:30:57 compute-0 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 08 10:30:57 compute-0 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:57.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 08 10:30:57 compute-0 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 08 10:30:57 compute-0 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3575398913' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 08 10:30:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:57.277Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:30:57 compute-0 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:57.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 08 10:30:57 compute-0 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 08 10:30:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:30:57.431 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 08 10:30:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:30:57.431 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 08 10:30:57 compute-0 ovn_metadata_agent[163169]: 2025-10-08 10:30:57.431 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 08 10:30:57 compute-0 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18234 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
